00:00:00.001 Started by upstream project "autotest-per-patch" build number 132726 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.118 The recommended git tool is: git 00:00:00.118 using credential 00000000-0000-0000-0000-000000000002 00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.184 Fetching changes from the remote Git repository 00:00:00.189 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.248 Using shallow fetch with depth 1 00:00:00.248 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.248 > git --version # timeout=10 00:00:00.308 > git --version # 'git version 2.39.2' 00:00:00.308 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.337 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.337 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.913 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.931 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.945 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.945 > git config core.sparsecheckout # timeout=10 00:00:04.957 > git read-tree -mu HEAD # timeout=10 00:00:04.977 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.006 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.006 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.145 [Pipeline] Start of Pipeline 00:00:05.158 [Pipeline] library 00:00:05.159 Loading library shm_lib@master 00:00:05.160 Library shm_lib@master is cached. Copying from home. 00:00:05.173 [Pipeline] node 00:00:05.183 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.184 [Pipeline] { 00:00:05.193 [Pipeline] catchError 00:00:05.194 [Pipeline] { 00:00:05.205 [Pipeline] wrap 00:00:05.212 [Pipeline] { 00:00:05.219 [Pipeline] stage 00:00:05.220 [Pipeline] { (Prologue) 00:00:05.234 [Pipeline] echo 00:00:05.235 Node: VM-host-SM9 00:00:05.241 [Pipeline] cleanWs 00:00:05.249 [WS-CLEANUP] Deleting project workspace... 00:00:05.249 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.255 [WS-CLEANUP] done 00:00:05.433 [Pipeline] setCustomBuildProperty 00:00:05.542 [Pipeline] httpRequest 00:00:05.902 [Pipeline] echo 00:00:05.903 Sorcerer 10.211.164.101 is alive 00:00:05.911 [Pipeline] retry 00:00:05.912 [Pipeline] { 00:00:05.924 [Pipeline] httpRequest 00:00:05.928 HttpMethod: GET 00:00:05.929 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.930 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.935 Response Code: HTTP/1.1 200 OK 00:00:05.936 Success: Status code 200 is in the accepted range: 200,404 00:00:05.937 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.911 [Pipeline] } 00:00:14.929 [Pipeline] // retry 00:00:14.936 [Pipeline] sh 00:00:15.214 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.229 [Pipeline] httpRequest 00:00:15.656 [Pipeline] echo 00:00:15.659 Sorcerer 10.211.164.101 is alive 00:00:15.669 [Pipeline] retry 00:00:15.671 [Pipeline] { 00:00:15.686 [Pipeline] httpRequest 00:00:15.691 HttpMethod: GET 00:00:15.692 URL: http://10.211.164.101/packages/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz 00:00:15.692 Sending request to url: http://10.211.164.101/packages/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz 00:00:15.695 Response Code: HTTP/1.1 200 OK 00:00:15.695 Success: Status code 200 is in the accepted range: 200,404 00:00:15.696 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz 00:06:09.612 [Pipeline] } 00:06:09.630 [Pipeline] // retry 00:06:09.638 [Pipeline] sh 00:06:09.919 + tar --no-same-owner -xf spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz 00:06:13.217 [Pipeline] sh 00:06:13.499 + git -C spdk log --oneline -n5 00:06:13.499 cf089b398 thread: fd_group-based interrupts 00:06:13.499 8a4656bc1 thread: move interrupt allocation to a function 00:06:13.499 09908f908 util: add method for setting fd_group's wrapper 00:06:13.499 697130caf util: multi-level fd_group nesting 00:06:13.499 6696ebaae util: keep track of nested child fd_groups 00:06:13.517 [Pipeline] writeFile 00:06:13.532 [Pipeline] sh 00:06:13.811 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:06:13.820 [Pipeline] sh 00:06:14.093 + cat autorun-spdk.conf 00:06:14.093 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:14.093 SPDK_TEST_NVME=1 00:06:14.093 SPDK_TEST_FTL=1 00:06:14.093 SPDK_TEST_ISAL=1 00:06:14.093 SPDK_RUN_ASAN=1 00:06:14.093 SPDK_RUN_UBSAN=1 00:06:14.093 SPDK_TEST_XNVME=1 00:06:14.093 SPDK_TEST_NVME_FDP=1 00:06:14.093 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:14.099 RUN_NIGHTLY=0 00:06:14.101 [Pipeline] } 00:06:14.114 [Pipeline] // stage 00:06:14.130 [Pipeline] stage 00:06:14.133 [Pipeline] { (Run VM) 00:06:14.147 [Pipeline] sh 00:06:14.425 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:06:14.425 + echo 'Start stage prepare_nvme.sh' 00:06:14.425 Start stage prepare_nvme.sh 00:06:14.426 + [[ -n 5 ]] 00:06:14.426 + disk_prefix=ex5 00:06:14.426 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:06:14.426 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:06:14.426 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:06:14.426 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:14.426 ++ SPDK_TEST_NVME=1 00:06:14.426 ++ SPDK_TEST_FTL=1 00:06:14.426 ++ SPDK_TEST_ISAL=1 00:06:14.426 ++ SPDK_RUN_ASAN=1 00:06:14.426 ++ 
SPDK_RUN_UBSAN=1 00:06:14.426 ++ SPDK_TEST_XNVME=1 00:06:14.426 ++ SPDK_TEST_NVME_FDP=1 00:06:14.426 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:14.426 ++ RUN_NIGHTLY=0 00:06:14.426 + cd /var/jenkins/workspace/nvme-vg-autotest 00:06:14.426 + nvme_files=() 00:06:14.426 + declare -A nvme_files 00:06:14.426 + backend_dir=/var/lib/libvirt/images/backends 00:06:14.426 + nvme_files['nvme.img']=5G 00:06:14.426 + nvme_files['nvme-cmb.img']=5G 00:06:14.426 + nvme_files['nvme-multi0.img']=4G 00:06:14.426 + nvme_files['nvme-multi1.img']=4G 00:06:14.426 + nvme_files['nvme-multi2.img']=4G 00:06:14.426 + nvme_files['nvme-openstack.img']=8G 00:06:14.426 + nvme_files['nvme-zns.img']=5G 00:06:14.426 + (( SPDK_TEST_NVME_PMR == 1 )) 00:06:14.426 + (( SPDK_TEST_FTL == 1 )) 00:06:14.426 + nvme_files["nvme-ftl.img"]=6G 00:06:14.426 + (( SPDK_TEST_NVME_FDP == 1 )) 00:06:14.426 + nvme_files["nvme-fdp.img"]=1G 00:06:14.426 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:06:14.426 + for nvme in "${!nvme_files[@]}" 00:06:14.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:06:14.426 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:06:14.426 + for nvme in "${!nvme_files[@]}" 00:06:14.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:06:14.426 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:06:14.426 + for nvme in "${!nvme_files[@]}" 00:06:14.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:06:14.426 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:06:14.426 + for nvme in "${!nvme_files[@]}" 00:06:14.426 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:06:14.683 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:06:14.683 + for nvme in "${!nvme_files[@]}" 00:06:14.683 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:06:14.683 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:06:14.683 + for nvme in "${!nvme_files[@]}" 00:06:14.683 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:06:14.683 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:06:14.683 + for nvme in "${!nvme_files[@]}" 00:06:14.683 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:06:14.683 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:06:14.683 + for nvme in "${!nvme_files[@]}" 00:06:14.683 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:06:14.683 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:06:14.683 + for nvme in "${!nvme_files[@]}" 00:06:14.683 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:06:14.941 Formatting 
'/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:06:14.941 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:06:14.941 + echo 'End stage prepare_nvme.sh' 00:06:14.941 End stage prepare_nvme.sh 00:06:14.950 [Pipeline] sh 00:06:15.227 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:06:15.227 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:06:15.227 00:06:15.227 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:06:15.227 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:06:15.227 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:06:15.227 HELP=0 00:06:15.227 DRY_RUN=0 00:06:15.227 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:06:15.227 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:06:15.227 NVME_AUTO_CREATE=0 00:06:15.227 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:06:15.227 NVME_CMB=,,,, 00:06:15.227 NVME_PMR=,,,, 00:06:15.227 NVME_ZNS=,,,, 00:06:15.227 NVME_MS=true,,,, 00:06:15.227 NVME_FDP=,,,on, 00:06:15.227 SPDK_VAGRANT_DISTRO=fedora39 00:06:15.227 SPDK_VAGRANT_VMCPU=10 00:06:15.227 SPDK_VAGRANT_VMRAM=12288 00:06:15.227 SPDK_VAGRANT_PROVIDER=libvirt 00:06:15.227 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:06:15.227 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:06:15.227 SPDK_OPENSTACK_NETWORK=0 00:06:15.227 VAGRANT_PACKAGE_BOX=0 00:06:15.227 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:06:15.227 FORCE_DISTRO=true 00:06:15.227 VAGRANT_BOX_VERSION= 00:06:15.227 EXTRA_VAGRANTFILES= 00:06:15.227 NIC_MODEL=e1000 00:06:15.227 00:06:15.227 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:06:15.227 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:06:18.507 Bringing machine 'default' up with 'libvirt' provider... 00:06:19.073 ==> default: Creating image (snapshot of base box volume). 00:06:19.332 ==> default: Creating domain with the following settings... 
00:06:19.332 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733490025_b22aedec37cb1c2f1370 00:06:19.332 ==> default: -- Domain type: kvm 00:06:19.332 ==> default: -- Cpus: 10 00:06:19.332 ==> default: -- Feature: acpi 00:06:19.332 ==> default: -- Feature: apic 00:06:19.332 ==> default: -- Feature: pae 00:06:19.332 ==> default: -- Memory: 12288M 00:06:19.332 ==> default: -- Memory Backing: hugepages: 00:06:19.332 ==> default: -- Management MAC: 00:06:19.332 ==> default: -- Loader: 00:06:19.332 ==> default: -- Nvram: 00:06:19.332 ==> default: -- Base box: spdk/fedora39 00:06:19.332 ==> default: -- Storage pool: default 00:06:19.332 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733490025_b22aedec37cb1c2f1370.img (20G) 00:06:19.332 ==> default: -- Volume Cache: default 00:06:19.332 ==> default: -- Kernel: 00:06:19.332 ==> default: -- Initrd: 00:06:19.332 ==> default: -- Graphics Type: vnc 00:06:19.332 ==> default: -- Graphics Port: -1 00:06:19.332 ==> default: -- Graphics IP: 127.0.0.1 00:06:19.332 ==> default: -- Graphics Password: Not defined 00:06:19.332 ==> default: -- Video Type: cirrus 00:06:19.332 ==> default: -- Video VRAM: 9216 00:06:19.332 ==> default: -- Sound Type: 00:06:19.332 ==> default: -- Keymap: en-us 00:06:19.332 ==> default: -- TPM Path: 00:06:19.332 ==> default: -- INPUT: type=mouse, bus=ps2 00:06:19.332 ==> default: -- Command line args: 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:06:19.332 ==> default: -> value=-drive, 00:06:19.332 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:06:19.332 ==> default: -> value=-drive, 00:06:19.332 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:06:19.332 ==> default: -> value=-drive, 00:06:19.332 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:19.332 ==> default: -> value=-drive, 00:06:19.332 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:19.332 ==> default: -> value=-drive, 00:06:19.332 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:06:19.332 ==> default: -> value=-drive, 00:06:19.332 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:06:19.332 ==> default: -> value=-device, 00:06:19.332 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:19.332 ==> default: Creating shared folders metadata... 00:06:19.332 ==> default: Starting domain. 00:06:20.711 ==> default: Waiting for domain to get an IP address... 00:06:38.803 ==> default: Waiting for SSH to become available... 00:06:38.803 ==> default: Configuring and enabling network interfaces... 00:06:41.363 default: SSH address: 192.168.121.130:22 00:06:41.363 default: SSH username: vagrant 00:06:41.363 default: SSH auth method: private key 00:06:43.898 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:52.003 ==> default: Mounting SSHFS shared folder... 00:06:52.937 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:52.937 ==> default: Checking Mount.. 00:06:54.315 ==> default: Folder Successfully Mounted! 00:06:54.315 ==> default: Running provisioner: file... 00:06:54.882 default: ~/.gitconfig => .gitconfig 00:06:55.447 00:06:55.447 SUCCESS! 00:06:55.447 00:06:55.447 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:06:55.447 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:55.447 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:06:55.447 00:06:55.454 [Pipeline] } 00:06:55.469 [Pipeline] // stage 00:06:55.476 [Pipeline] dir 00:06:55.477 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:06:55.478 [Pipeline] { 00:06:55.489 [Pipeline] catchError 00:06:55.490 [Pipeline] { 00:06:55.501 [Pipeline] sh 00:06:55.778 + vagrant ssh-config --host vagrant 00:06:55.778 + sed -ne '/^Host/,$p' 00:06:55.778 + tee ssh_conf 00:06:59.962 Host vagrant 00:06:59.962 HostName 192.168.121.130 00:06:59.962 User vagrant 00:06:59.962 Port 22 00:06:59.962 UserKnownHostsFile /dev/null 00:06:59.962 StrictHostKeyChecking no 00:06:59.962 PasswordAuthentication no 00:06:59.963 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:59.963 IdentitiesOnly yes 00:06:59.963 LogLevel FATAL 00:06:59.963 ForwardAgent yes 00:06:59.963 ForwardX11 yes 00:07:00.040 [Pipeline] withEnv 00:07:00.043 [Pipeline] { 00:07:00.057 [Pipeline] sh 00:07:00.334 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:07:00.334 source /etc/os-release 00:07:00.334 [[ -e /image.version ]] && img=$(< /image.version) 00:07:00.334 # Minimal, systemd-like check.
00:07:00.334 if [[ -e /.dockerenv ]]; then 00:07:00.334 # Clear garbage from the node's name: 00:07:00.334 # agt-er_autotest_547-896 -> autotest_547-896 00:07:00.334 # $HOSTNAME is the actual container id 00:07:00.334 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:07:00.334 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:07:00.334 # We can assume this is a mount from a host where container is running, 00:07:00.334 # so fetch its hostname to easily identify the target swarm worker. 00:07:00.334 container="$(< /etc/hostname) ($agent)" 00:07:00.334 else 00:07:00.334 # Fallback 00:07:00.334 container=$agent 00:07:00.334 fi 00:07:00.334 fi 00:07:00.334 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:07:00.334 00:07:00.344 [Pipeline] } 00:07:00.356 [Pipeline] // withEnv 00:07:00.362 [Pipeline] setCustomBuildProperty 00:07:00.379 [Pipeline] stage 00:07:00.381 [Pipeline] { (Tests) 00:07:00.397 [Pipeline] sh 00:07:00.676 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:07:00.948 [Pipeline] sh 00:07:01.226 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:07:01.239 [Pipeline] timeout 00:07:01.239 Timeout set to expire in 50 min 00:07:01.241 [Pipeline] { 00:07:01.253 [Pipeline] sh 00:07:01.529 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:07:02.093 HEAD is now at cf089b398 thread: fd_group-based interrupts 00:07:02.104 [Pipeline] sh 00:07:02.381 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:07:02.651 [Pipeline] sh 00:07:02.989 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:07:03.003 [Pipeline] sh 00:07:03.283 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:07:03.541 ++ readlink -f spdk_repo 00:07:03.541 + DIR_ROOT=/home/vagrant/spdk_repo 00:07:03.541 + [[ -n /home/vagrant/spdk_repo ]] 00:07:03.541 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:07:03.541 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:07:03.541 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:07:03.541 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:07:03.541 + [[ -d /home/vagrant/spdk_repo/output ]] 00:07:03.541 + [[ nvme-vg-autotest == pkgdep-* ]] 00:07:03.541 + cd /home/vagrant/spdk_repo 00:07:03.541 + source /etc/os-release 00:07:03.541 ++ NAME='Fedora Linux' 00:07:03.541 ++ VERSION='39 (Cloud Edition)' 00:07:03.541 ++ ID=fedora 00:07:03.541 ++ VERSION_ID=39 00:07:03.541 ++ VERSION_CODENAME= 00:07:03.541 ++ PLATFORM_ID=platform:f39 00:07:03.541 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:07:03.541 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:03.541 ++ LOGO=fedora-logo-icon 00:07:03.541 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:07:03.541 ++ HOME_URL=https://fedoraproject.org/ 00:07:03.541 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:07:03.541 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:03.541 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:03.541 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:03.541 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:07:03.541 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:03.541 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:07:03.541 ++ SUPPORT_END=2024-11-12 00:07:03.541 ++ VARIANT='Cloud Edition' 00:07:03.541 ++ VARIANT_ID=cloud 00:07:03.541 + uname -a 00:07:03.541 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:07:03.541 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:03.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:04.058 Hugepages 00:07:04.058 node hugesize free / total 00:07:04.058 node0 1048576kB 0 / 0 00:07:04.058 node0 2048kB 0 / 0 00:07:04.058 00:07:04.058 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:04.058 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:04.058 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:04.058 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:04.316 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:04.316 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:04.316 + rm -f /tmp/spdk-ld-path 00:07:04.316 + source autorun-spdk.conf 00:07:04.316 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:04.316 ++ SPDK_TEST_NVME=1 00:07:04.316 ++ SPDK_TEST_FTL=1 00:07:04.316 ++ SPDK_TEST_ISAL=1 00:07:04.316 ++ SPDK_RUN_ASAN=1 00:07:04.316 ++ SPDK_RUN_UBSAN=1 00:07:04.316 ++ SPDK_TEST_XNVME=1 00:07:04.317 ++ SPDK_TEST_NVME_FDP=1 00:07:04.317 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:04.317 ++ RUN_NIGHTLY=0 00:07:04.317 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:04.317 + [[ -n '' ]] 00:07:04.317 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:07:04.317 + for M in /var/spdk/build-*-manifest.txt 00:07:04.317 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:04.317 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:04.317 + for M in /var/spdk/build-*-manifest.txt 00:07:04.317 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:04.317 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:04.317 + for M in /var/spdk/build-*-manifest.txt 00:07:04.317 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:04.317 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:04.317 ++ uname 00:07:04.317 + [[ Linux == \L\i\n\u\x ]] 00:07:04.317 + sudo dmesg -T 00:07:04.317 + sudo dmesg --clear 00:07:04.317 + dmesg_pid=5293 00:07:04.317 
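The dmesg_pid=5293 record above pairs with the "sudo dmesg -Tw" record that follows: the background launch and the pid capture are one idiom, and xtrace prints the two records out of order. A minimal sketch of that idiom, assuming only standard util-linux dmesg flags (the exact autotest script may differ):

    sudo dmesg -T        # snapshot the existing ring buffer with human-readable timestamps
    sudo dmesg --clear   # empty the ring buffer so the test run starts clean
    sudo dmesg -Tw &     # -w: follow new kernel messages for the rest of the job
    dmesg_pid=$!         # capture the follower's pid so teardown can kill it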
+ sudo dmesg -Tw 00:07:04.317 + [[ Fedora Linux == FreeBSD ]] 00:07:04.317 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:04.317 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:04.317 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:04.317 + [[ -x /usr/src/fio-static/fio ]] 00:07:04.317 + export FIO_BIN=/usr/src/fio-static/fio 00:07:04.317 + FIO_BIN=/usr/src/fio-static/fio 00:07:04.317 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:04.317 + [[ ! -v VFIO_QEMU_BIN ]] 00:07:04.317 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:04.317 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:04.317 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:04.317 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:04.317 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:04.317 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:04.317 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:04.317 13:01:10 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:04.317 13:01:10 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:04.317 13:01:10 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:07:04.317 13:01:10 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:07:04.317 13:01:10 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:04.576 13:01:10 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:04.576 13:01:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.576 13:01:10 -- scripts/common.sh@15 -- $ shopt -s extglob 00:07:04.576 13:01:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:04.576 13:01:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.576 13:01:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.576 13:01:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.576 13:01:10 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.576 13:01:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.576 13:01:10 -- paths/export.sh@5 -- $ export PATH 00:07:04.576 13:01:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.576 13:01:10 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:07:04.576 13:01:10 -- common/autobuild_common.sh@493 -- $ date +%s 00:07:04.576 13:01:10 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733490070.XXXXXX 00:07:04.576 13:01:10 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733490070.BFID8b 00:07:04.576 13:01:10 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:07:04.576 13:01:10 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:07:04.576 13:01:10 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:07:04.576 13:01:10 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:07:04.576 13:01:10 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:07:04.576 13:01:10 -- common/autobuild_common.sh@509 -- $ get_config_params 00:07:04.576 13:01:10 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:07:04.576 13:01:10 -- common/autotest_common.sh@10 -- $ set +x 00:07:04.576 13:01:10 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:07:04.576 13:01:10 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:07:04.576 13:01:10 -- pm/common@17 -- $ local monitor 00:07:04.576 13:01:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:04.576 13:01:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:04.576 13:01:10 -- pm/common@25 -- $ sleep 1 00:07:04.576 13:01:10 -- pm/common@21 -- $ date +%s 00:07:04.576 13:01:10 -- pm/common@21 -- $ date +%s 00:07:04.576 13:01:10 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490070 00:07:04.576 13:01:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490070 00:07:04.576 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490070_collect-vmstat.pm.log 00:07:04.576 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490070_collect-cpu-load.pm.log 00:07:05.511 13:01:11 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:07:05.511 13:01:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:05.511 13:01:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:05.511 13:01:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:07:05.511 13:01:11 -- spdk/autobuild.sh@16 -- $ date -u 00:07:05.511 Fri Dec 6 01:01:11 PM UTC 2024 00:07:05.511 13:01:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:05.511 v25.01-pre-308-gcf089b398 00:07:05.511 13:01:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:07:05.511 13:01:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:07:05.511 13:01:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:05.511 13:01:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:05.511 13:01:11 -- common/autotest_common.sh@10 -- $ set +x 00:07:05.511 ************************************ 00:07:05.511 START TEST asan 00:07:05.511 ************************************ 00:07:05.511 using asan 00:07:05.511 13:01:11 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:07:05.511 00:07:05.511 real 0m0.000s 00:07:05.511 user 0m0.000s 00:07:05.511 sys 0m0.000s 00:07:05.511 13:01:11 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:05.511 13:01:11 asan -- common/autotest_common.sh@10 -- $ set +x 00:07:05.511 ************************************ 00:07:05.511 END TEST asan 00:07:05.511 ************************************ 00:07:05.511 13:01:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:05.511 13:01:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:05.511 13:01:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:05.511 13:01:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:05.511 13:01:11 -- common/autotest_common.sh@10 -- $ set +x 00:07:05.511 ************************************ 00:07:05.511 START TEST ubsan 00:07:05.511 ************************************ 00:07:05.511 using ubsan 00:07:05.511 13:01:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:07:05.511 00:07:05.511 real 0m0.000s 00:07:05.511 user 0m0.000s 00:07:05.511 sys 0m0.000s 00:07:05.511 13:01:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:05.511 13:01:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:05.511 ************************************ 00:07:05.511 END TEST ubsan 00:07:05.511 ************************************ 00:07:05.511 13:01:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:05.511 13:01:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:05.511 13:01:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:05.511 13:01:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:05.769 13:01:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:05.769 13:01:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:05.769 13:01:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:07:05.769 13:01:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:05.769 13:01:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:07:05.769 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:05.769 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:06.028 Using 'verbs' RDMA provider 00:07:19.628 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:31.882 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:31.882 Creating mk/config.mk...done. 00:07:31.882 Creating mk/cc.flags.mk...done. 00:07:31.882 Type 'make' to build. 00:07:31.882 13:01:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:07:31.882 13:01:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:31.882 13:01:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:31.882 13:01:38 -- common/autotest_common.sh@10 -- $ set +x 00:07:31.882 ************************************ 00:07:31.882 START TEST make 00:07:31.882 ************************************ 00:07:31.882 13:01:38 make -- common/autotest_common.sh@1129 -- $ make -j10 00:07:32.448 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:07:32.448 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:07:32.448 meson setup builddir \ 00:07:32.448 -Dwith-libaio=enabled \ 00:07:32.448 -Dwith-liburing=enabled \ 00:07:32.448 -Dwith-libvfn=disabled \ 00:07:32.448 -Dwith-spdk=disabled \ 00:07:32.448 -Dexamples=false \ 00:07:32.448 -Dtests=false \ 00:07:32.448 -Dtools=false && \ 00:07:32.448 meson compile -C builddir && \ 00:07:32.448 cd -) 00:07:32.448 make[1]: Nothing to be done for 'all'. 
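The "run_test make make -j10" record above is the same wrapper that produced the asan and ubsan blocks earlier: it prints START TEST/END TEST banners around the command and times it, which is where the "************" separators and the real/user/sys lines come from. A hedged bash sketch of such a wrapper (the real helper lives in spdk/test/common/autotest_common.sh and differs in detail):

    run_test_sketch() {                      # illustrative stand-in, not SPDK's run_test
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        time "$@"                            # run and time the wrapped command
        local rc=$?
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    run_test_sketch make make -j10           # as invoked by autobuild.sh above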
00:07:35.725 The Meson build system 00:07:35.725 Version: 1.5.0 00:07:35.725 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:07:35.725 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:07:35.725 Build type: native build 00:07:35.725 Project name: xnvme 00:07:35.725 Project version: 0.7.5 00:07:35.725 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:35.725 C linker for the host machine: cc ld.bfd 2.40-14 00:07:35.725 Host machine cpu family: x86_64 00:07:35.725 Host machine cpu: x86_64 00:07:35.725 Message: host_machine.system: linux 00:07:35.725 Compiler for C supports arguments -Wno-missing-braces: YES 00:07:35.725 Compiler for C supports arguments -Wno-cast-function-type: YES 00:07:35.725 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:07:35.725 Run-time dependency threads found: YES 00:07:35.725 Has header "setupapi.h" : NO 00:07:35.725 Has header "linux/blkzoned.h" : YES 00:07:35.725 Has header "linux/blkzoned.h" : YES (cached) 00:07:35.725 Has header "libaio.h" : YES 00:07:35.725 Library aio found: YES 00:07:35.725 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:35.725 Run-time dependency liburing found: YES 2.2 00:07:35.725 Dependency libvfn skipped: feature with-libvfn disabled 00:07:35.725 Found CMake: /usr/bin/cmake (3.27.7) 00:07:35.725 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:07:35.725 Subproject spdk : skipped: feature with-spdk disabled 00:07:35.725 Run-time dependency appleframeworks found: NO (tried framework) 00:07:35.725 Run-time dependency appleframeworks found: NO (tried framework) 00:07:35.725 Library rt found: YES 00:07:35.725 Checking for function "clock_gettime" with dependency -lrt: YES 00:07:35.725 Configuring xnvme_config.h using configuration 00:07:35.725 Configuring xnvme.spec using configuration 00:07:35.725 Run-time dependency bash-completion found: YES 2.11 00:07:35.725 Message: Bash-completions: /usr/share/bash-completion/completions 00:07:35.725 Program cp found: YES (/usr/bin/cp) 00:07:35.725 Build targets in project: 3 00:07:35.725 00:07:35.725 xnvme 0.7.5 00:07:35.725 00:07:35.725 Subprojects 00:07:35.725 spdk : NO Feature 'with-spdk' disabled 00:07:35.725 00:07:35.726 User defined options 00:07:35.726 examples : false 00:07:35.726 tests : false 00:07:35.726 tools : false 00:07:35.726 with-libaio : enabled 00:07:35.726 with-liburing: enabled 00:07:35.726 with-libvfn : disabled 00:07:35.726 with-spdk : disabled 00:07:35.726 00:07:35.726 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:36.290 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:07:36.290 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:07:36.290 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:07:36.290 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:07:36.290 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:07:36.290 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:07:36.290 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:07:36.548 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:07:36.548 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:07:36.548 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:07:36.548 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:07:36.548 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:07:36.548 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:07:36.548 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:07:36.548 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:07:36.548 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:07:36.549 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:07:36.806 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:07:36.806 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:07:36.806 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:07:36.806 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:07:36.806 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:07:36.806 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:07:36.806 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:07:36.806 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:07:36.806 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:07:36.806 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:07:36.806 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:07:36.806 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:07:36.806 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:07:36.806 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:07:36.806 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:07:36.806 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:07:36.806 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:07:37.063 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:07:37.063 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:07:37.063 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:07:37.063 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:07:37.063 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:07:37.063 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:07:37.063 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:07:37.063 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:07:37.063 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:07:37.063 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:07:37.063 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:07:37.063 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:07:37.063 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:07:37.063 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:07:37.063 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:07:37.063 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:07:37.063 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:07:37.063 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:07:37.063 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:07:37.063 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:07:37.320 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:07:37.320 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:07:37.320 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:07:37.320 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:07:37.320 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:07:37.320 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:07:37.320 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:07:37.320 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:07:37.320 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:07:37.577 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:07:37.577 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:07:37.577 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:07:37.577 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:07:37.577 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:07:37.577 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:07:37.577 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:07:37.577 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:07:37.577 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:07:37.835 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:07:37.835 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:07:38.401 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:07:38.401 [75/76] Linking static target lib/libxnvme.a 00:07:38.401 [76/76] Linking target lib/libxnvme.so.0.7.5 00:07:38.401 INFO: autodetecting backend as ninja 00:07:38.401 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:07:38.659 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:07:53.535 The Meson build system 00:07:53.535 Version: 1.5.0 00:07:53.535 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:53.535 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:53.535 Build type: native build 00:07:53.535 Program cat found: YES (/usr/bin/cat) 00:07:53.535 Project name: DPDK 00:07:53.535 Project version: 24.03.0 00:07:53.535 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:53.535 C linker for the host machine: cc ld.bfd 2.40-14 00:07:53.535 Host machine cpu family: x86_64 00:07:53.535 Host machine cpu: x86_64 00:07:53.535 Message: ## Building in Developer Mode ## 00:07:53.535 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:53.535 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:53.535 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:53.535 Program python3 found: YES (/usr/bin/python3) 00:07:53.535 Program cat found: YES (/usr/bin/cat) 00:07:53.535 Compiler for C supports arguments -march=native: YES 00:07:53.535 Checking for size of "void *" : 8 00:07:53.535 Checking for size of "void *" : 8 (cached) 00:07:53.535 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:07:53.535 Library m found: YES 00:07:53.535 Library numa found: YES 00:07:53.535 Has header "numaif.h" : YES 00:07:53.535 Library fdt found: NO 00:07:53.535 Library execinfo found: NO 00:07:53.535 Has header "execinfo.h" : YES 00:07:53.535 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:53.535 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:53.535 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:53.535 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:53.535 Run-time dependency openssl found: YES 3.1.1 00:07:53.535 Run-time dependency libpcap found: YES 1.10.4 00:07:53.535 Has header "pcap.h" with dependency libpcap: YES 00:07:53.535 Compiler for C supports arguments -Wcast-qual: YES 00:07:53.535 Compiler for C supports arguments -Wdeprecated: YES 00:07:53.535 Compiler for C supports arguments -Wformat: YES 00:07:53.535 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:53.535 Compiler for C supports arguments -Wformat-security: NO 00:07:53.535 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:53.535 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:53.535 Compiler for C supports arguments -Wnested-externs: YES 00:07:53.535 Compiler for C supports arguments -Wold-style-definition: YES 00:07:53.535 Compiler for C supports arguments -Wpointer-arith: YES 00:07:53.535 Compiler for C supports arguments -Wsign-compare: YES 00:07:53.535 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:53.535 Compiler for C supports arguments -Wundef: YES 00:07:53.535 Compiler for C supports arguments -Wwrite-strings: YES 00:07:53.535 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:53.535 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:53.535 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:53.535 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:53.535 Program objdump found: YES (/usr/bin/objdump) 00:07:53.535 Compiler for C supports arguments -mavx512f: YES 00:07:53.535 Checking if "AVX512 checking" compiles: YES 00:07:53.535 Fetching value of define "__SSE4_2__" : 1 00:07:53.535 Fetching value of define "__AES__" : 1 00:07:53.535 Fetching value of define "__AVX__" : 1 00:07:53.535 Fetching value of define "__AVX2__" : 1 00:07:53.535 Fetching value of define "__AVX512BW__" : (undefined) 00:07:53.535 Fetching value of define "__AVX512CD__" : (undefined) 00:07:53.535 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:53.535 Fetching value of define "__AVX512F__" : (undefined) 00:07:53.535 Fetching value of define "__AVX512VL__" : (undefined) 00:07:53.535 Fetching value of define "__PCLMUL__" : 1 00:07:53.535 Fetching value of define "__RDRND__" : 1 00:07:53.535 Fetching value of define "__RDSEED__" : 1 00:07:53.535 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:53.535 Fetching value of define "__znver1__" : (undefined) 00:07:53.535 Fetching value of define "__znver2__" : (undefined) 00:07:53.535 Fetching value of define "__znver3__" : (undefined) 00:07:53.535 Fetching value of define "__znver4__" : (undefined) 00:07:53.535 Library asan found: YES 00:07:53.535 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:53.535 Message: lib/log: Defining dependency "log" 00:07:53.535 Message: lib/kvargs: Defining dependency "kvargs" 00:07:53.535 Message: lib/telemetry: Defining dependency "telemetry" 00:07:53.535 Library rt found: YES 00:07:53.535 
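The "Compiler for C supports arguments ..." and "Fetching value of define ..." records here are meson probing the toolchain before configuring DPDK. Roughly equivalent manual probes, for illustration only (meson's internal checks are not literally these commands):

    echo 'int main(void){return 0;}' > /tmp/probe.c
    cc -Werror -mavx512f /tmp/probe.c -o /dev/null &&
        echo 'Compiler supports -mavx512f: YES'        # flag-support probe
    cc -march=native -dM -E -x c - </dev/null | grep -w __AVX2__ ||
        echo '__AVX2__ : (undefined)'                  # predefined-macro probe

The -march=native matters here because DPDK configures with it (see "Compiler for C supports arguments -march=native: YES" above), which is why __AVX2__ resolves to 1 on this host while the __AVX512*__ macros stay undefined.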
Checking for function "getentropy" : NO 00:07:53.535 Message: lib/eal: Defining dependency "eal" 00:07:53.535 Message: lib/ring: Defining dependency "ring" 00:07:53.535 Message: lib/rcu: Defining dependency "rcu" 00:07:53.535 Message: lib/mempool: Defining dependency "mempool" 00:07:53.535 Message: lib/mbuf: Defining dependency "mbuf" 00:07:53.535 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:53.535 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:53.535 Compiler for C supports arguments -mpclmul: YES 00:07:53.535 Compiler for C supports arguments -maes: YES 00:07:53.535 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:53.535 Compiler for C supports arguments -mavx512bw: YES 00:07:53.535 Compiler for C supports arguments -mavx512dq: YES 00:07:53.535 Compiler for C supports arguments -mavx512vl: YES 00:07:53.535 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:53.535 Compiler for C supports arguments -mavx2: YES 00:07:53.535 Compiler for C supports arguments -mavx: YES 00:07:53.535 Message: lib/net: Defining dependency "net" 00:07:53.535 Message: lib/meter: Defining dependency "meter" 00:07:53.535 Message: lib/ethdev: Defining dependency "ethdev" 00:07:53.535 Message: lib/pci: Defining dependency "pci" 00:07:53.535 Message: lib/cmdline: Defining dependency "cmdline" 00:07:53.535 Message: lib/hash: Defining dependency "hash" 00:07:53.535 Message: lib/timer: Defining dependency "timer" 00:07:53.535 Message: lib/compressdev: Defining dependency "compressdev" 00:07:53.535 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:53.535 Message: lib/dmadev: Defining dependency "dmadev" 00:07:53.535 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:53.535 Message: lib/power: Defining dependency "power" 00:07:53.535 Message: lib/reorder: Defining dependency "reorder" 00:07:53.535 Message: lib/security: Defining dependency "security" 00:07:53.535 Has header "linux/userfaultfd.h" : YES 00:07:53.535 Has header "linux/vduse.h" : YES 00:07:53.535 Message: lib/vhost: Defining dependency "vhost" 00:07:53.535 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:53.535 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:53.535 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:53.535 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:53.535 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:53.535 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:53.535 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:53.535 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:53.535 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:53.535 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:53.535 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:53.535 Configuring doxy-api-html.conf using configuration 00:07:53.535 Configuring doxy-api-man.conf using configuration 00:07:53.535 Program mandb found: YES (/usr/bin/mandb) 00:07:53.535 Program sphinx-build found: NO 00:07:53.535 Configuring rte_build_config.h using configuration 00:07:53.535 Message: 00:07:53.536 ================= 00:07:53.536 Applications Enabled 00:07:53.536 ================= 00:07:53.536 00:07:53.536 apps: 00:07:53.536 00:07:53.536 00:07:53.536 Message: 00:07:53.536 ================= 00:07:53.536 Libraries Enabled 00:07:53.536 ================= 
00:07:53.536 00:07:53.536 libs: 00:07:53.536 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:53.536 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:53.536 cryptodev, dmadev, power, reorder, security, vhost, 00:07:53.536 00:07:53.536 Message: 00:07:53.536 =============== 00:07:53.536 Drivers Enabled 00:07:53.536 =============== 00:07:53.536 00:07:53.536 common: 00:07:53.536 00:07:53.536 bus: 00:07:53.536 pci, vdev, 00:07:53.536 mempool: 00:07:53.536 ring, 00:07:53.536 dma: 00:07:53.536 00:07:53.536 net: 00:07:53.536 00:07:53.536 crypto: 00:07:53.536 00:07:53.536 compress: 00:07:53.536 00:07:53.536 vdpa: 00:07:53.536 00:07:53.536 00:07:53.536 Message: 00:07:53.536 ================= 00:07:53.536 Content Skipped 00:07:53.536 ================= 00:07:53.536 00:07:53.536 apps: 00:07:53.536 dumpcap: explicitly disabled via build config 00:07:53.536 graph: explicitly disabled via build config 00:07:53.536 pdump: explicitly disabled via build config 00:07:53.536 proc-info: explicitly disabled via build config 00:07:53.536 test-acl: explicitly disabled via build config 00:07:53.536 test-bbdev: explicitly disabled via build config 00:07:53.536 test-cmdline: explicitly disabled via build config 00:07:53.536 test-compress-perf: explicitly disabled via build config 00:07:53.536 test-crypto-perf: explicitly disabled via build config 00:07:53.536 test-dma-perf: explicitly disabled via build config 00:07:53.536 test-eventdev: explicitly disabled via build config 00:07:53.536 test-fib: explicitly disabled via build config 00:07:53.536 test-flow-perf: explicitly disabled via build config 00:07:53.536 test-gpudev: explicitly disabled via build config 00:07:53.536 test-mldev: explicitly disabled via build config 00:07:53.536 test-pipeline: explicitly disabled via build config 00:07:53.536 test-pmd: explicitly disabled via build config 00:07:53.536 test-regex: explicitly disabled via build config 00:07:53.536 test-sad: explicitly disabled via build config 00:07:53.536 test-security-perf: explicitly disabled via build config 00:07:53.536 00:07:53.536 libs: 00:07:53.536 argparse: explicitly disabled via build config 00:07:53.536 metrics: explicitly disabled via build config 00:07:53.536 acl: explicitly disabled via build config 00:07:53.536 bbdev: explicitly disabled via build config 00:07:53.536 bitratestats: explicitly disabled via build config 00:07:53.536 bpf: explicitly disabled via build config 00:07:53.536 cfgfile: explicitly disabled via build config 00:07:53.536 distributor: explicitly disabled via build config 00:07:53.536 efd: explicitly disabled via build config 00:07:53.536 eventdev: explicitly disabled via build config 00:07:53.536 dispatcher: explicitly disabled via build config 00:07:53.536 gpudev: explicitly disabled via build config 00:07:53.536 gro: explicitly disabled via build config 00:07:53.536 gso: explicitly disabled via build config 00:07:53.536 ip_frag: explicitly disabled via build config 00:07:53.536 jobstats: explicitly disabled via build config 00:07:53.536 latencystats: explicitly disabled via build config 00:07:53.536 lpm: explicitly disabled via build config 00:07:53.536 member: explicitly disabled via build config 00:07:53.536 pcapng: explicitly disabled via build config 00:07:53.536 rawdev: explicitly disabled via build config 00:07:53.536 regexdev: explicitly disabled via build config 00:07:53.536 mldev: explicitly disabled via build config 00:07:53.536 rib: explicitly disabled via build config 00:07:53.536 sched: explicitly disabled via build 
config 00:07:53.536 stack: explicitly disabled via build config 00:07:53.536 ipsec: explicitly disabled via build config 00:07:53.536 pdcp: explicitly disabled via build config 00:07:53.536 fib: explicitly disabled via build config 00:07:53.536 port: explicitly disabled via build config 00:07:53.536 pdump: explicitly disabled via build config 00:07:53.536 table: explicitly disabled via build config 00:07:53.536 pipeline: explicitly disabled via build config 00:07:53.536 graph: explicitly disabled via build config 00:07:53.536 node: explicitly disabled via build config 00:07:53.536 00:07:53.536 drivers: 00:07:53.536 common/cpt: not in enabled drivers build config 00:07:53.536 common/dpaax: not in enabled drivers build config 00:07:53.536 common/iavf: not in enabled drivers build config 00:07:53.536 common/idpf: not in enabled drivers build config 00:07:53.536 common/ionic: not in enabled drivers build config 00:07:53.536 common/mvep: not in enabled drivers build config 00:07:53.536 common/octeontx: not in enabled drivers build config 00:07:53.536 bus/auxiliary: not in enabled drivers build config 00:07:53.536 bus/cdx: not in enabled drivers build config 00:07:53.536 bus/dpaa: not in enabled drivers build config 00:07:53.536 bus/fslmc: not in enabled drivers build config 00:07:53.536 bus/ifpga: not in enabled drivers build config 00:07:53.536 bus/platform: not in enabled drivers build config 00:07:53.536 bus/uacce: not in enabled drivers build config 00:07:53.536 bus/vmbus: not in enabled drivers build config 00:07:53.536 common/cnxk: not in enabled drivers build config 00:07:53.536 common/mlx5: not in enabled drivers build config 00:07:53.536 common/nfp: not in enabled drivers build config 00:07:53.536 common/nitrox: not in enabled drivers build config 00:07:53.536 common/qat: not in enabled drivers build config 00:07:53.536 common/sfc_efx: not in enabled drivers build config 00:07:53.536 mempool/bucket: not in enabled drivers build config 00:07:53.536 mempool/cnxk: not in enabled drivers build config 00:07:53.536 mempool/dpaa: not in enabled drivers build config 00:07:53.536 mempool/dpaa2: not in enabled drivers build config 00:07:53.536 mempool/octeontx: not in enabled drivers build config 00:07:53.536 mempool/stack: not in enabled drivers build config 00:07:53.536 dma/cnxk: not in enabled drivers build config 00:07:53.536 dma/dpaa: not in enabled drivers build config 00:07:53.536 dma/dpaa2: not in enabled drivers build config 00:07:53.536 dma/hisilicon: not in enabled drivers build config 00:07:53.536 dma/idxd: not in enabled drivers build config 00:07:53.536 dma/ioat: not in enabled drivers build config 00:07:53.536 dma/skeleton: not in enabled drivers build config 00:07:53.536 net/af_packet: not in enabled drivers build config 00:07:53.536 net/af_xdp: not in enabled drivers build config 00:07:53.536 net/ark: not in enabled drivers build config 00:07:53.536 net/atlantic: not in enabled drivers build config 00:07:53.536 net/avp: not in enabled drivers build config 00:07:53.536 net/axgbe: not in enabled drivers build config 00:07:53.536 net/bnx2x: not in enabled drivers build config 00:07:53.536 net/bnxt: not in enabled drivers build config 00:07:53.536 net/bonding: not in enabled drivers build config 00:07:53.536 net/cnxk: not in enabled drivers build config 00:07:53.536 net/cpfl: not in enabled drivers build config 00:07:53.536 net/cxgbe: not in enabled drivers build config 00:07:53.536 net/dpaa: not in enabled drivers build config 00:07:53.536 net/dpaa2: not in enabled drivers build 
config 00:07:53.536 net/e1000: not in enabled drivers build config 00:07:53.536 net/ena: not in enabled drivers build config 00:07:53.536 net/enetc: not in enabled drivers build config 00:07:53.536 net/enetfec: not in enabled drivers build config 00:07:53.536 net/enic: not in enabled drivers build config 00:07:53.536 net/failsafe: not in enabled drivers build config 00:07:53.536 net/fm10k: not in enabled drivers build config 00:07:53.536 net/gve: not in enabled drivers build config 00:07:53.536 net/hinic: not in enabled drivers build config 00:07:53.536 net/hns3: not in enabled drivers build config 00:07:53.536 net/i40e: not in enabled drivers build config 00:07:53.536 net/iavf: not in enabled drivers build config 00:07:53.536 net/ice: not in enabled drivers build config 00:07:53.536 net/idpf: not in enabled drivers build config 00:07:53.536 net/igc: not in enabled drivers build config 00:07:53.536 net/ionic: not in enabled drivers build config 00:07:53.536 net/ipn3ke: not in enabled drivers build config 00:07:53.536 net/ixgbe: not in enabled drivers build config 00:07:53.536 net/mana: not in enabled drivers build config 00:07:53.536 net/memif: not in enabled drivers build config 00:07:53.536 net/mlx4: not in enabled drivers build config 00:07:53.536 net/mlx5: not in enabled drivers build config 00:07:53.536 net/mvneta: not in enabled drivers build config 00:07:53.536 net/mvpp2: not in enabled drivers build config 00:07:53.536 net/netvsc: not in enabled drivers build config 00:07:53.536 net/nfb: not in enabled drivers build config 00:07:53.536 net/nfp: not in enabled drivers build config 00:07:53.536 net/ngbe: not in enabled drivers build config 00:07:53.536 net/null: not in enabled drivers build config 00:07:53.536 net/octeontx: not in enabled drivers build config 00:07:53.536 net/octeon_ep: not in enabled drivers build config 00:07:53.536 net/pcap: not in enabled drivers build config 00:07:53.536 net/pfe: not in enabled drivers build config 00:07:53.536 net/qede: not in enabled drivers build config 00:07:53.536 net/ring: not in enabled drivers build config 00:07:53.536 net/sfc: not in enabled drivers build config 00:07:53.536 net/softnic: not in enabled drivers build config 00:07:53.536 net/tap: not in enabled drivers build config 00:07:53.536 net/thunderx: not in enabled drivers build config 00:07:53.536 net/txgbe: not in enabled drivers build config 00:07:53.536 net/vdev_netvsc: not in enabled drivers build config 00:07:53.536 net/vhost: not in enabled drivers build config 00:07:53.536 net/virtio: not in enabled drivers build config 00:07:53.536 net/vmxnet3: not in enabled drivers build config 00:07:53.536 raw/*: missing internal dependency, "rawdev" 00:07:53.536 crypto/armv8: not in enabled drivers build config 00:07:53.536 crypto/bcmfs: not in enabled drivers build config 00:07:53.536 crypto/caam_jr: not in enabled drivers build config 00:07:53.536 crypto/ccp: not in enabled drivers build config 00:07:53.537 crypto/cnxk: not in enabled drivers build config 00:07:53.537 crypto/dpaa_sec: not in enabled drivers build config 00:07:53.537 crypto/dpaa2_sec: not in enabled drivers build config 00:07:53.537 crypto/ipsec_mb: not in enabled drivers build config 00:07:53.537 crypto/mlx5: not in enabled drivers build config 00:07:53.537 crypto/mvsam: not in enabled drivers build config 00:07:53.537 crypto/nitrox: not in enabled drivers build config 00:07:53.537 crypto/null: not in enabled drivers build config 00:07:53.537 crypto/octeontx: not in enabled drivers build config 00:07:53.537 
crypto/openssl: not in enabled drivers build config 00:07:53.537 crypto/scheduler: not in enabled drivers build config 00:07:53.537 crypto/uadk: not in enabled drivers build config 00:07:53.537 crypto/virtio: not in enabled drivers build config 00:07:53.537 compress/isal: not in enabled drivers build config 00:07:53.537 compress/mlx5: not in enabled drivers build config 00:07:53.537 compress/nitrox: not in enabled drivers build config 00:07:53.537 compress/octeontx: not in enabled drivers build config 00:07:53.537 compress/zlib: not in enabled drivers build config 00:07:53.537 regex/*: missing internal dependency, "regexdev" 00:07:53.537 ml/*: missing internal dependency, "mldev" 00:07:53.537 vdpa/ifc: not in enabled drivers build config 00:07:53.537 vdpa/mlx5: not in enabled drivers build config 00:07:53.537 vdpa/nfp: not in enabled drivers build config 00:07:53.537 vdpa/sfc: not in enabled drivers build config 00:07:53.537 event/*: missing internal dependency, "eventdev" 00:07:53.537 baseband/*: missing internal dependency, "bbdev" 00:07:53.537 gpu/*: missing internal dependency, "gpudev" 00:07:53.537 00:07:53.537 00:07:54.104 Build targets in project: 85 00:07:54.104 00:07:54.104 DPDK 24.03.0 00:07:54.104 00:07:54.104 User defined options 00:07:54.104 buildtype : debug 00:07:54.104 default_library : shared 00:07:54.104 libdir : lib 00:07:54.104 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:54.104 b_sanitize : address 00:07:54.104 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:54.104 c_link_args : 00:07:54.104 cpu_instruction_set: native 00:07:54.104 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:54.104 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:54.104 enable_docs : false 00:07:54.104 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:54.104 enable_kmods : false 00:07:54.104 max_lcores : 128 00:07:54.104 tests : false 00:07:54.104 00:07:54.104 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:55.041 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:55.299 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:55.299 [2/268] Linking static target lib/librte_kvargs.a 00:07:55.299 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:55.299 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:55.299 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:55.299 [6/268] Linking static target lib/librte_log.a 00:07:56.235 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.235 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:56.494 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:56.752 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:56.752 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 
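The "User defined options" summary above fully determines this DPDK 24.03 sub-build: a debug build of shared libraries with AddressSanitizer enabled (b_sanitize=address) and most apps, libs, and drivers compiled out. A minimal sketch of a meson invocation that would reproduce these options follows; the actual command comes from SPDK's dpdk build wrapper and is not printed in this log, so every flag below is an assumption read back from the logged values.

```sh
# Hypothetical reconstruction of the configure step from the logged
# "User defined options" summary; not the literal command the CI ran.
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --libdir=lib \
    --buildtype=debug \
    --default-library=shared \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Dmax_lcores=128 \
    -Dtests=false -Denable_docs=false -Denable_kmods=false \
    -Ddisable_apps='dumpcap,graph,pdump,...' \
    -Ddisable_libs='acl,argparse,bbdev,...' \
    -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...'   # full lists as logged above
```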
00:07:56.752 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:56.752 [13/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.752 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:57.011 [15/268] Linking static target lib/librte_telemetry.a 00:07:57.011 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:57.011 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:57.011 [18/268] Linking target lib/librte_log.so.24.1 00:07:57.011 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:57.270 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:57.529 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:57.529 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:58.096 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:58.096 [24/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:58.353 [25/268] Linking target lib/librte_telemetry.so.24.1 00:07:58.353 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:58.353 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:58.353 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:58.644 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:58.644 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:58.644 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:58.645 [32/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:58.902 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:58.902 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:58.902 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:59.466 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:59.466 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:59.724 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:59.981 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:59.981 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:00.238 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:00.238 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:00.238 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:00.238 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:00.238 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:00.496 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:00.496 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:00.754 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:00.754 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:01.011 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 
00:08:01.577 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:01.834 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:01.834 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:02.092 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:02.092 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:02.092 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:02.092 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:02.351 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:02.351 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:02.610 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:02.610 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:03.177 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:03.177 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:03.177 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:03.435 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:03.435 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:03.435 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:03.693 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:03.693 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:03.693 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:03.951 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:03.951 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:04.209 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:04.209 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:04.468 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:04.468 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:04.730 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:04.730 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:04.730 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:04.730 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:05.299 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:05.299 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:05.584 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:05.585 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:05.585 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:05.585 [86/268] Linking static target lib/librte_ring.a 00:08:05.585 [87/268] Linking static target lib/librte_eal.a 00:08:05.844 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:06.102 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:06.443 [90/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:06.443 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 
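The bracketed [N/268] prefixes are ninja's step counter for this DPDK build. A sketch of reproducing or resuming the phase outside CI, with only the build directory path taken from the log and the rest assumed:

```sh
cd /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
ninja -j 10               # matches the backend command the log prints later
ninja lib/librte_eal.a    # a single logged target can also be rebuilt by its output path
```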
00:08:06.707 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:06.707 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:06.707 [94/268] Linking static target lib/librte_rcu.a 00:08:06.707 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:06.707 [96/268] Linking static target lib/librte_mempool.a 00:08:06.965 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:06.965 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:06.965 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:07.223 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:07.483 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:07.742 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:07.742 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:08.001 [104/268] Linking static target lib/librte_mbuf.a 00:08:08.001 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:08.001 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:08.260 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:08.260 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:08.260 [109/268] Linking static target lib/librte_net.a 00:08:08.260 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.518 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:08.518 [112/268] Linking static target lib/librte_meter.a 00:08:08.789 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:09.047 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:09.047 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.305 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.305 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.305 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:09.870 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:09.870 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:10.127 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:10.385 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:10.643 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:11.575 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:11.575 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:11.575 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:11.575 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:11.575 [128/268] Linking static target lib/librte_pci.a 00:08:11.833 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:11.833 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:11.833 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:11.833 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:11.833 [133/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:11.833 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:12.116 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:12.116 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:12.116 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:12.116 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:12.116 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:12.373 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:12.373 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:12.373 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:12.373 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:12.373 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:12.937 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:12.937 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:12.937 [147/268] Linking static target lib/librte_cmdline.a 00:08:13.193 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:13.757 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:13.757 [150/268] Linking static target lib/librte_timer.a 00:08:13.757 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:13.757 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:14.014 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:14.271 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:14.529 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:14.529 [156/268] Linking static target lib/librte_ethdev.a 00:08:14.529 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:14.788 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:15.069 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:15.333 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:15.333 [161/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:15.333 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:15.591 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:15.591 [164/268] Linking static target lib/librte_hash.a 00:08:15.591 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:15.850 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:16.109 [167/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:16.109 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:16.109 [169/268] Linking static target lib/librte_compressdev.a 00:08:16.384 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:16.384 [171/268] Linking static target lib/librte_dmadev.a 00:08:16.384 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:16.668 [173/268] 
Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:16.668 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:17.269 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:17.269 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:17.535 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:17.803 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:17.803 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:17.803 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:17.803 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:18.072 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:18.340 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:18.340 [184/268] Linking static target lib/librte_cryptodev.a 00:08:18.628 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:18.628 [186/268] Linking static target lib/librte_power.a 00:08:18.895 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:18.895 [188/268] Linking static target lib/librte_reorder.a 00:08:19.153 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:19.153 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:19.411 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:19.978 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:20.237 [193/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:20.237 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:20.237 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:20.237 [196/268] Linking static target lib/librte_security.a 00:08:21.627 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:21.627 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:21.627 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:21.627 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:21.627 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:21.886 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:22.151 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:22.151 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:22.733 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:22.997 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:22.997 [207/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:22.997 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:23.260 [209/268] Linking target lib/librte_eal.so.24.1 00:08:23.260 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:23.260 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:23.519 [212/268] Generating 
symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:23.519 [213/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:23.519 [214/268] Linking target lib/librte_meter.so.24.1 00:08:23.519 [215/268] Linking target lib/librte_ring.so.24.1 00:08:23.519 [216/268] Linking target lib/librte_pci.so.24.1 00:08:23.519 [217/268] Linking target lib/librte_timer.so.24.1 00:08:23.519 [218/268] Linking target lib/librte_dmadev.so.24.1 00:08:23.814 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:23.814 [220/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:23.814 [221/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:23.814 [222/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:23.814 [223/268] Linking static target drivers/librte_bus_vdev.a 00:08:23.814 [224/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:23.814 [225/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:23.814 [226/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:23.814 [227/268] Linking target lib/librte_rcu.so.24.1 00:08:24.101 [228/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:24.101 [229/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:24.101 [230/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:24.101 [231/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:24.101 [232/268] Linking target lib/librte_mempool.so.24.1 00:08:24.384 [233/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:24.384 [234/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:24.384 [235/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:24.384 [236/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:24.384 [237/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:24.384 [238/268] Linking static target drivers/librte_bus_pci.a 00:08:24.384 [239/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:24.384 [240/268] Linking target lib/librte_mbuf.so.24.1 00:08:24.384 [241/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:24.384 [242/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:24.384 [243/268] Linking static target drivers/librte_mempool_ring.a 00:08:24.653 [244/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.653 [245/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:24.653 [246/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:24.653 [247/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:24.653 [248/268] Linking target lib/librte_net.so.24.1 00:08:24.653 [249/268] Linking target lib/librte_cryptodev.so.24.1 00:08:24.912 [250/268] Linking target lib/librte_reorder.so.24.1 00:08:24.912 [251/268] Linking target lib/librte_compressdev.so.24.1 00:08:24.912 [252/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:24.912 [253/268] 
Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:25.170 [254/268] Linking target lib/librte_cmdline.so.24.1 00:08:25.170 [255/268] Linking target lib/librte_security.so.24.1 00:08:25.170 [256/268] Linking target lib/librte_hash.so.24.1 00:08:25.428 [257/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.428 [258/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:25.428 [259/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:26.802 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.802 [261/268] Linking target lib/librte_ethdev.so.24.1 00:08:27.060 [262/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:27.060 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:27.060 [264/268] Linking target lib/librte_power.so.24.1 00:08:35.191 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:35.191 [266/268] Linking static target lib/librte_vhost.a 00:08:35.757 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.014 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:36.014 INFO: autodetecting backend as ninja 00:08:36.014 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:02.555 CC lib/log/log.o 00:09:02.555 CC lib/ut_mock/mock.o 00:09:02.555 CC lib/log/log_flags.o 00:09:02.555 CC lib/log/log_deprecated.o 00:09:02.555 CC lib/ut/ut.o 00:09:02.555 LIB libspdk_ut.a 00:09:02.555 SO libspdk_ut.so.2.0 00:09:02.555 LIB libspdk_ut_mock.a 00:09:02.555 LIB libspdk_log.a 00:09:02.555 SO libspdk_ut_mock.so.6.0 00:09:02.555 SYMLINK libspdk_ut.so 00:09:02.555 SO libspdk_log.so.7.1 00:09:02.555 SYMLINK libspdk_ut_mock.so 00:09:02.555 SYMLINK libspdk_log.so 00:09:02.813 CC lib/ioat/ioat.o 00:09:02.813 CC lib/util/base64.o 00:09:02.813 CC lib/util/bit_array.o 00:09:02.813 CC lib/util/cpuset.o 00:09:02.813 CC lib/util/crc16.o 00:09:02.813 CC lib/util/crc32.o 00:09:02.813 CC lib/dma/dma.o 00:09:02.813 CC lib/util/crc32c.o 00:09:02.813 CXX lib/trace_parser/trace.o 00:09:03.071 CC lib/vfio_user/host/vfio_user_pci.o 00:09:03.071 CC lib/util/crc32_ieee.o 00:09:03.071 CC lib/util/crc64.o 00:09:03.071 CC lib/util/dif.o 00:09:03.071 CC lib/util/fd.o 00:09:03.071 CC lib/util/fd_group.o 00:09:03.071 CC lib/vfio_user/host/vfio_user.o 00:09:03.071 CC lib/util/file.o 00:09:03.329 LIB libspdk_dma.a 00:09:03.329 CC lib/util/hexlify.o 00:09:03.329 CC lib/util/iov.o 00:09:03.329 SO libspdk_dma.so.5.0 00:09:03.329 LIB libspdk_ioat.a 00:09:03.329 SO libspdk_ioat.so.7.0 00:09:03.329 SYMLINK libspdk_dma.so 00:09:03.329 CC lib/util/math.o 00:09:03.329 CC lib/util/net.o 00:09:03.329 CC lib/util/pipe.o 00:09:03.587 SYMLINK libspdk_ioat.so 00:09:03.587 CC lib/util/strerror_tls.o 00:09:03.587 CC lib/util/string.o 00:09:03.587 LIB libspdk_vfio_user.a 00:09:03.587 CC lib/util/uuid.o 00:09:03.587 SO libspdk_vfio_user.so.5.0 00:09:03.587 CC lib/util/xor.o 00:09:03.587 CC lib/util/zipf.o 00:09:03.587 SYMLINK libspdk_vfio_user.so 00:09:03.587 CC lib/util/md5.o 00:09:04.154 LIB libspdk_util.a 00:09:04.413 SO libspdk_util.so.10.1 00:09:04.413 LIB libspdk_trace_parser.a 00:09:04.413 SO libspdk_trace_parser.so.6.0 00:09:04.413 SYMLINK libspdk_util.so 00:09:04.669 SYMLINK libspdk_trace_parser.so 00:09:04.670 CC lib/json/json_parse.o 
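At this point the log has switched from the DPDK ninja build to SPDK's own make output: CC compiles an object, LIB archives a static library, and each SO/SYMLINK pair (e.g. libspdk_log.so.7.1 and libspdk_log.so) produces a versioned shared object plus its unversioned symlink. A quick way to verify those artifacts after the build, assuming SPDK's usual build/lib output directory:

```sh
cd /home/vagrant/spdk_repo/spdk
ls -l build/lib/libspdk_log.so*                     # symlink and versioned .so, per the SO/SYMLINK lines
readelf -d build/lib/libspdk_log.so | grep SONAME   # embedded soname should match the logged version
```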
00:09:04.670 CC lib/json/json_util.o 00:09:04.670 CC lib/json/json_write.o 00:09:04.670 CC lib/env_dpdk/memory.o 00:09:04.670 CC lib/env_dpdk/env.o 00:09:04.670 CC lib/rdma_utils/rdma_utils.o 00:09:04.670 CC lib/conf/conf.o 00:09:04.670 CC lib/env_dpdk/pci.o 00:09:04.670 CC lib/vmd/vmd.o 00:09:04.670 CC lib/idxd/idxd.o 00:09:04.927 CC lib/idxd/idxd_user.o 00:09:05.184 LIB libspdk_rdma_utils.a 00:09:05.184 SO libspdk_rdma_utils.so.1.0 00:09:05.184 LIB libspdk_conf.a 00:09:05.184 SO libspdk_conf.so.6.0 00:09:05.184 LIB libspdk_json.a 00:09:05.184 CC lib/idxd/idxd_kernel.o 00:09:05.184 SYMLINK libspdk_rdma_utils.so 00:09:05.184 CC lib/env_dpdk/init.o 00:09:05.184 SYMLINK libspdk_conf.so 00:09:05.184 SO libspdk_json.so.6.0 00:09:05.184 CC lib/env_dpdk/threads.o 00:09:05.184 SYMLINK libspdk_json.so 00:09:05.184 CC lib/env_dpdk/pci_ioat.o 00:09:05.441 CC lib/vmd/led.o 00:09:05.441 CC lib/env_dpdk/pci_virtio.o 00:09:05.441 CC lib/rdma_provider/common.o 00:09:05.441 CC lib/jsonrpc/jsonrpc_server.o 00:09:05.441 CC lib/env_dpdk/pci_vmd.o 00:09:05.699 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:05.699 CC lib/env_dpdk/pci_idxd.o 00:09:05.699 CC lib/env_dpdk/pci_event.o 00:09:05.699 CC lib/env_dpdk/sigbus_handler.o 00:09:05.699 LIB libspdk_idxd.a 00:09:05.699 CC lib/env_dpdk/pci_dpdk.o 00:09:05.699 SO libspdk_idxd.so.12.1 00:09:05.699 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:05.699 LIB libspdk_vmd.a 00:09:05.699 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:05.956 SYMLINK libspdk_idxd.so 00:09:05.956 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:05.956 CC lib/jsonrpc/jsonrpc_client.o 00:09:05.956 LIB libspdk_rdma_provider.a 00:09:05.956 SO libspdk_vmd.so.6.0 00:09:05.956 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:05.956 SO libspdk_rdma_provider.so.7.0 00:09:05.956 SYMLINK libspdk_vmd.so 00:09:05.956 SYMLINK libspdk_rdma_provider.so 00:09:06.213 LIB libspdk_jsonrpc.a 00:09:06.213 SO libspdk_jsonrpc.so.6.0 00:09:06.213 SYMLINK libspdk_jsonrpc.so 00:09:06.531 CC lib/rpc/rpc.o 00:09:06.812 LIB libspdk_env_dpdk.a 00:09:06.812 LIB libspdk_rpc.a 00:09:07.071 SO libspdk_rpc.so.6.0 00:09:07.071 SO libspdk_env_dpdk.so.15.1 00:09:07.071 SYMLINK libspdk_rpc.so 00:09:07.071 SYMLINK libspdk_env_dpdk.so 00:09:07.330 CC lib/keyring/keyring_rpc.o 00:09:07.330 CC lib/keyring/keyring.o 00:09:07.330 CC lib/trace/trace.o 00:09:07.330 CC lib/trace/trace_flags.o 00:09:07.330 CC lib/trace/trace_rpc.o 00:09:07.330 CC lib/notify/notify.o 00:09:07.330 CC lib/notify/notify_rpc.o 00:09:07.588 LIB libspdk_notify.a 00:09:07.588 SO libspdk_notify.so.6.0 00:09:07.588 SYMLINK libspdk_notify.so 00:09:07.588 LIB libspdk_keyring.a 00:09:07.846 LIB libspdk_trace.a 00:09:07.846 SO libspdk_keyring.so.2.0 00:09:07.846 SO libspdk_trace.so.11.0 00:09:07.846 SYMLINK libspdk_keyring.so 00:09:07.846 SYMLINK libspdk_trace.so 00:09:08.104 CC lib/sock/sock_rpc.o 00:09:08.104 CC lib/sock/sock.o 00:09:08.104 CC lib/thread/thread.o 00:09:08.104 CC lib/thread/iobuf.o 00:09:08.672 LIB libspdk_sock.a 00:09:08.672 SO libspdk_sock.so.10.0 00:09:08.929 SYMLINK libspdk_sock.so 00:09:09.188 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:09.188 CC lib/nvme/nvme_fabric.o 00:09:09.188 CC lib/nvme/nvme_ctrlr.o 00:09:09.188 CC lib/nvme/nvme_ns_cmd.o 00:09:09.188 CC lib/nvme/nvme_ns.o 00:09:09.188 CC lib/nvme/nvme_pcie_common.o 00:09:09.188 CC lib/nvme/nvme_pcie.o 00:09:09.188 CC lib/nvme/nvme_qpair.o 00:09:09.188 CC lib/nvme/nvme.o 00:09:10.564 CC lib/nvme/nvme_quirks.o 00:09:10.564 CC lib/nvme/nvme_transport.o 00:09:10.564 CC lib/nvme/nvme_discovery.o 00:09:10.564 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:10.564 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:10.823 CC lib/nvme/nvme_tcp.o 00:09:11.082 CC lib/nvme/nvme_opal.o 00:09:11.082 CC lib/nvme/nvme_io_msg.o 00:09:11.651 CC lib/nvme/nvme_poll_group.o 00:09:11.651 CC lib/nvme/nvme_zns.o 00:09:11.651 CC lib/nvme/nvme_stubs.o 00:09:11.651 LIB libspdk_thread.a 00:09:11.651 SO libspdk_thread.so.11.0 00:09:11.910 CC lib/nvme/nvme_auth.o 00:09:11.910 SYMLINK libspdk_thread.so 00:09:11.910 CC lib/nvme/nvme_cuse.o 00:09:11.910 CC lib/nvme/nvme_rdma.o 00:09:12.477 CC lib/accel/accel.o 00:09:12.735 CC lib/blob/blobstore.o 00:09:13.018 CC lib/init/json_config.o 00:09:13.296 CC lib/virtio/virtio.o 00:09:13.296 CC lib/fsdev/fsdev.o 00:09:13.564 CC lib/fsdev/fsdev_io.o 00:09:13.564 CC lib/init/subsystem.o 00:09:13.564 CC lib/init/subsystem_rpc.o 00:09:13.824 CC lib/init/rpc.o 00:09:13.824 CC lib/fsdev/fsdev_rpc.o 00:09:13.824 CC lib/virtio/virtio_vhost_user.o 00:09:14.083 CC lib/virtio/virtio_vfio_user.o 00:09:14.083 CC lib/virtio/virtio_pci.o 00:09:14.083 LIB libspdk_init.a 00:09:14.342 CC lib/accel/accel_rpc.o 00:09:14.342 SO libspdk_init.so.6.0 00:09:14.342 CC lib/accel/accel_sw.o 00:09:14.342 SYMLINK libspdk_init.so 00:09:14.600 CC lib/blob/request.o 00:09:14.600 CC lib/blob/zeroes.o 00:09:14.600 CC lib/event/app.o 00:09:14.601 CC lib/event/reactor.o 00:09:14.601 LIB libspdk_fsdev.a 00:09:14.601 LIB libspdk_virtio.a 00:09:14.858 SO libspdk_virtio.so.7.0 00:09:14.858 SO libspdk_fsdev.so.2.0 00:09:14.858 SYMLINK libspdk_fsdev.so 00:09:14.858 CC lib/event/log_rpc.o 00:09:14.858 CC lib/event/app_rpc.o 00:09:14.858 SYMLINK libspdk_virtio.so 00:09:14.858 CC lib/blob/blob_bs_dev.o 00:09:14.858 LIB libspdk_accel.a 00:09:14.858 LIB libspdk_nvme.a 00:09:15.115 SO libspdk_accel.so.16.0 00:09:15.115 CC lib/event/scheduler_static.o 00:09:15.115 SYMLINK libspdk_accel.so 00:09:15.115 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:15.372 SO libspdk_nvme.so.15.0 00:09:15.372 CC lib/bdev/bdev.o 00:09:15.372 CC lib/bdev/bdev_rpc.o 00:09:15.372 CC lib/bdev/bdev_zone.o 00:09:15.372 CC lib/bdev/scsi_nvme.o 00:09:15.372 CC lib/bdev/part.o 00:09:15.630 SYMLINK libspdk_nvme.so 00:09:15.630 LIB libspdk_event.a 00:09:15.887 SO libspdk_event.so.14.0 00:09:15.887 SYMLINK libspdk_event.so 00:09:16.452 LIB libspdk_fuse_dispatcher.a 00:09:16.452 SO libspdk_fuse_dispatcher.so.1.0 00:09:16.709 SYMLINK libspdk_fuse_dispatcher.so 00:09:18.643 LIB libspdk_blob.a 00:09:18.643 SO libspdk_blob.so.12.0 00:09:18.900 SYMLINK libspdk_blob.so 00:09:19.158 CC lib/lvol/lvol.o 00:09:19.158 CC lib/blobfs/blobfs.o 00:09:19.158 CC lib/blobfs/tree.o 00:09:19.723 LIB libspdk_bdev.a 00:09:19.723 SO libspdk_bdev.so.17.0 00:09:19.723 SYMLINK libspdk_bdev.so 00:09:19.981 CC lib/nvmf/ctrlr.o 00:09:19.981 CC lib/nvmf/ctrlr_discovery.o 00:09:19.981 CC lib/nvmf/ctrlr_bdev.o 00:09:19.981 CC lib/nvmf/subsystem.o 00:09:19.981 CC lib/ftl/ftl_core.o 00:09:19.981 CC lib/ublk/ublk.o 00:09:19.981 CC lib/nbd/nbd.o 00:09:19.981 CC lib/scsi/dev.o 00:09:20.239 LIB libspdk_blobfs.a 00:09:20.239 SO libspdk_blobfs.so.11.0 00:09:20.497 SYMLINK libspdk_blobfs.so 00:09:20.497 CC lib/nbd/nbd_rpc.o 00:09:20.497 CC lib/scsi/lun.o 00:09:20.756 CC lib/scsi/port.o 00:09:20.756 CC lib/ftl/ftl_init.o 00:09:21.014 CC lib/ftl/ftl_layout.o 00:09:21.014 CC lib/scsi/scsi.o 00:09:21.014 LIB libspdk_nbd.a 00:09:21.014 LIB libspdk_lvol.a 00:09:21.014 SO libspdk_nbd.so.7.0 00:09:21.014 SO libspdk_lvol.so.11.0 00:09:21.014 SYMLINK libspdk_nbd.so 00:09:21.014 CC lib/ftl/ftl_debug.o 00:09:21.014 CC 
lib/ublk/ublk_rpc.o 00:09:21.272 CC lib/scsi/scsi_bdev.o 00:09:21.272 SYMLINK libspdk_lvol.so 00:09:21.272 CC lib/ftl/ftl_io.o 00:09:21.272 CC lib/scsi/scsi_pr.o 00:09:21.272 CC lib/scsi/scsi_rpc.o 00:09:21.272 CC lib/scsi/task.o 00:09:21.531 CC lib/ftl/ftl_sb.o 00:09:21.531 LIB libspdk_ublk.a 00:09:21.531 CC lib/ftl/ftl_l2p.o 00:09:21.531 CC lib/nvmf/nvmf.o 00:09:21.531 SO libspdk_ublk.so.3.0 00:09:21.531 CC lib/ftl/ftl_l2p_flat.o 00:09:21.790 SYMLINK libspdk_ublk.so 00:09:21.790 CC lib/ftl/ftl_nv_cache.o 00:09:21.790 CC lib/ftl/ftl_band.o 00:09:21.790 CC lib/ftl/ftl_band_ops.o 00:09:21.790 CC lib/ftl/ftl_writer.o 00:09:21.790 CC lib/ftl/ftl_rq.o 00:09:22.048 CC lib/ftl/ftl_reloc.o 00:09:22.049 CC lib/ftl/ftl_l2p_cache.o 00:09:22.049 LIB libspdk_scsi.a 00:09:22.307 SO libspdk_scsi.so.9.0 00:09:22.307 CC lib/nvmf/nvmf_rpc.o 00:09:22.307 CC lib/nvmf/transport.o 00:09:22.575 SYMLINK libspdk_scsi.so 00:09:22.575 CC lib/ftl/ftl_p2l.o 00:09:22.575 CC lib/ftl/ftl_p2l_log.o 00:09:22.575 CC lib/nvmf/tcp.o 00:09:22.844 CC lib/nvmf/stubs.o 00:09:23.102 CC lib/iscsi/conn.o 00:09:23.102 CC lib/iscsi/init_grp.o 00:09:23.361 CC lib/iscsi/iscsi.o 00:09:23.619 CC lib/iscsi/param.o 00:09:23.619 CC lib/nvmf/mdns_server.o 00:09:23.876 CC lib/iscsi/portal_grp.o 00:09:23.876 CC lib/ftl/mngt/ftl_mngt.o 00:09:23.876 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:23.876 CC lib/iscsi/tgt_node.o 00:09:24.134 CC lib/iscsi/iscsi_subsystem.o 00:09:24.393 CC lib/nvmf/rdma.o 00:09:24.393 CC lib/iscsi/iscsi_rpc.o 00:09:24.393 CC lib/vhost/vhost.o 00:09:24.393 CC lib/vhost/vhost_rpc.o 00:09:24.393 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:24.650 CC lib/vhost/vhost_scsi.o 00:09:24.908 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:24.908 CC lib/nvmf/auth.o 00:09:25.166 CC lib/iscsi/task.o 00:09:25.166 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:25.166 CC lib/vhost/vhost_blk.o 00:09:25.424 CC lib/vhost/rte_vhost_user.o 00:09:25.682 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:25.682 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:25.940 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:26.198 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:26.198 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:26.455 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:26.455 LIB libspdk_iscsi.a 00:09:26.455 SO libspdk_iscsi.so.8.0 00:09:26.712 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:26.712 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:26.712 CC lib/ftl/utils/ftl_conf.o 00:09:26.712 CC lib/ftl/utils/ftl_md.o 00:09:26.712 SYMLINK libspdk_iscsi.so 00:09:26.712 CC lib/ftl/utils/ftl_mempool.o 00:09:26.970 CC lib/ftl/utils/ftl_bitmap.o 00:09:26.970 CC lib/ftl/utils/ftl_property.o 00:09:26.970 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:26.970 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:26.970 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:26.970 LIB libspdk_vhost.a 00:09:26.970 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:27.229 SO libspdk_vhost.so.8.0 00:09:27.229 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:27.229 SYMLINK libspdk_vhost.so 00:09:27.229 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:27.229 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:27.229 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:27.229 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:27.229 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:27.486 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:27.486 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:27.486 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:27.486 CC lib/ftl/base/ftl_base_dev.o 00:09:27.486 CC lib/ftl/base/ftl_base_bdev.o 00:09:27.486 CC lib/ftl/ftl_trace.o 00:09:28.053 LIB libspdk_ftl.a 00:09:28.310 SO libspdk_ftl.so.9.0 00:09:28.567 LIB 
libspdk_nvmf.a 00:09:28.567 SYMLINK libspdk_ftl.so 00:09:28.866 SO libspdk_nvmf.so.20.0 00:09:29.123 SYMLINK libspdk_nvmf.so 00:09:29.379 CC module/env_dpdk/env_dpdk_rpc.o 00:09:29.379 CC module/accel/dsa/accel_dsa.o 00:09:29.379 CC module/blob/bdev/blob_bdev.o 00:09:29.379 CC module/sock/posix/posix.o 00:09:29.636 CC module/accel/iaa/accel_iaa.o 00:09:29.636 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:29.636 CC module/accel/error/accel_error.o 00:09:29.636 CC module/accel/ioat/accel_ioat.o 00:09:29.636 CC module/keyring/file/keyring.o 00:09:29.636 CC module/fsdev/aio/fsdev_aio.o 00:09:29.636 LIB libspdk_env_dpdk_rpc.a 00:09:29.636 SO libspdk_env_dpdk_rpc.so.6.0 00:09:29.636 SYMLINK libspdk_env_dpdk_rpc.so 00:09:29.636 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:29.636 CC module/keyring/file/keyring_rpc.o 00:09:29.894 CC module/accel/error/accel_error_rpc.o 00:09:29.894 CC module/accel/iaa/accel_iaa_rpc.o 00:09:29.894 CC module/accel/ioat/accel_ioat_rpc.o 00:09:29.894 LIB libspdk_scheduler_dynamic.a 00:09:29.894 LIB libspdk_blob_bdev.a 00:09:29.894 SO libspdk_scheduler_dynamic.so.4.0 00:09:29.894 CC module/accel/dsa/accel_dsa_rpc.o 00:09:29.894 SO libspdk_blob_bdev.so.12.0 00:09:29.894 SYMLINK libspdk_scheduler_dynamic.so 00:09:29.894 LIB libspdk_accel_iaa.a 00:09:29.894 SYMLINK libspdk_blob_bdev.so 00:09:29.894 CC module/fsdev/aio/linux_aio_mgr.o 00:09:29.894 LIB libspdk_accel_error.a 00:09:29.894 LIB libspdk_keyring_file.a 00:09:29.894 LIB libspdk_accel_ioat.a 00:09:29.894 SO libspdk_accel_iaa.so.3.0 00:09:29.894 SO libspdk_accel_error.so.2.0 00:09:29.894 SO libspdk_keyring_file.so.2.0 00:09:29.894 SO libspdk_accel_ioat.so.6.0 00:09:30.152 LIB libspdk_accel_dsa.a 00:09:30.152 SYMLINK libspdk_accel_iaa.so 00:09:30.152 SYMLINK libspdk_keyring_file.so 00:09:30.152 SYMLINK libspdk_accel_error.so 00:09:30.152 SO libspdk_accel_dsa.so.5.0 00:09:30.152 SYMLINK libspdk_accel_ioat.so 00:09:30.152 CC module/keyring/linux/keyring.o 00:09:30.152 CC module/keyring/linux/keyring_rpc.o 00:09:30.152 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:30.152 SYMLINK libspdk_accel_dsa.so 00:09:30.410 CC module/scheduler/gscheduler/gscheduler.o 00:09:30.410 LIB libspdk_scheduler_dpdk_governor.a 00:09:30.410 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:30.410 LIB libspdk_keyring_linux.a 00:09:30.410 SO libspdk_keyring_linux.so.1.0 00:09:30.410 CC module/blobfs/bdev/blobfs_bdev.o 00:09:30.410 CC module/bdev/delay/vbdev_delay.o 00:09:30.410 CC module/bdev/error/vbdev_error.o 00:09:30.410 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:30.410 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:30.410 CC module/bdev/gpt/gpt.o 00:09:30.410 CC module/bdev/lvol/vbdev_lvol.o 00:09:30.410 LIB libspdk_scheduler_gscheduler.a 00:09:30.668 SYMLINK libspdk_keyring_linux.so 00:09:30.668 SO libspdk_scheduler_gscheduler.so.4.0 00:09:30.668 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:30.668 LIB libspdk_fsdev_aio.a 00:09:30.668 SYMLINK libspdk_scheduler_gscheduler.so 00:09:30.668 CC module/bdev/gpt/vbdev_gpt.o 00:09:30.668 LIB libspdk_sock_posix.a 00:09:30.668 SO libspdk_fsdev_aio.so.1.0 00:09:30.668 SO libspdk_sock_posix.so.6.0 00:09:30.668 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:30.925 LIB libspdk_blobfs_bdev.a 00:09:30.925 SO libspdk_blobfs_bdev.so.6.0 00:09:30.925 SYMLINK libspdk_fsdev_aio.so 00:09:30.925 SYMLINK libspdk_sock_posix.so 00:09:30.925 CC module/bdev/error/vbdev_error_rpc.o 00:09:30.925 SYMLINK libspdk_blobfs_bdev.so 00:09:31.183 LIB libspdk_bdev_gpt.a 00:09:31.183 CC 
module/bdev/malloc/bdev_malloc.o 00:09:31.183 SO libspdk_bdev_gpt.so.6.0 00:09:31.183 CC module/bdev/null/bdev_null.o 00:09:31.183 LIB libspdk_bdev_error.a 00:09:31.183 LIB libspdk_bdev_delay.a 00:09:31.183 CC module/bdev/nvme/bdev_nvme.o 00:09:31.183 CC module/bdev/passthru/vbdev_passthru.o 00:09:31.441 SO libspdk_bdev_delay.so.6.0 00:09:31.441 SO libspdk_bdev_error.so.6.0 00:09:31.441 SYMLINK libspdk_bdev_gpt.so 00:09:31.441 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:31.441 CC module/bdev/raid/bdev_raid.o 00:09:31.441 CC module/bdev/split/vbdev_split.o 00:09:31.441 SYMLINK libspdk_bdev_delay.so 00:09:31.441 SYMLINK libspdk_bdev_error.so 00:09:31.441 CC module/bdev/split/vbdev_split_rpc.o 00:09:31.441 CC module/bdev/nvme/nvme_rpc.o 00:09:31.700 LIB libspdk_bdev_lvol.a 00:09:31.700 SO libspdk_bdev_lvol.so.6.0 00:09:31.700 CC module/bdev/null/bdev_null_rpc.o 00:09:31.700 CC module/bdev/nvme/bdev_mdns_client.o 00:09:31.700 SYMLINK libspdk_bdev_lvol.so 00:09:31.700 CC module/bdev/nvme/vbdev_opal.o 00:09:31.976 LIB libspdk_bdev_split.a 00:09:31.976 SO libspdk_bdev_split.so.6.0 00:09:31.976 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:31.976 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:31.976 CC module/bdev/raid/bdev_raid_rpc.o 00:09:31.976 LIB libspdk_bdev_null.a 00:09:31.976 SYMLINK libspdk_bdev_split.so 00:09:31.976 SO libspdk_bdev_null.so.6.0 00:09:32.235 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:32.235 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:32.235 SYMLINK libspdk_bdev_null.so 00:09:32.235 LIB libspdk_bdev_malloc.a 00:09:32.235 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:32.235 LIB libspdk_bdev_passthru.a 00:09:32.235 SO libspdk_bdev_malloc.so.6.0 00:09:32.235 SO libspdk_bdev_passthru.so.6.0 00:09:32.493 SYMLINK libspdk_bdev_malloc.so 00:09:32.493 CC module/bdev/raid/bdev_raid_sb.o 00:09:32.493 CC module/bdev/xnvme/bdev_xnvme.o 00:09:32.493 SYMLINK libspdk_bdev_passthru.so 00:09:32.493 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:32.493 CC module/bdev/raid/raid0.o 00:09:32.493 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:09:32.752 CC module/bdev/aio/bdev_aio.o 00:09:32.752 CC module/bdev/aio/bdev_aio_rpc.o 00:09:32.752 CC module/bdev/raid/raid1.o 00:09:32.752 CC module/bdev/raid/concat.o 00:09:33.009 LIB libspdk_bdev_zone_block.a 00:09:33.009 SO libspdk_bdev_zone_block.so.6.0 00:09:33.009 LIB libspdk_bdev_xnvme.a 00:09:33.009 SYMLINK libspdk_bdev_zone_block.so 00:09:33.267 SO libspdk_bdev_xnvme.so.3.0 00:09:33.267 CC module/bdev/ftl/bdev_ftl.o 00:09:33.267 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:33.267 SYMLINK libspdk_bdev_xnvme.so 00:09:33.267 CC module/bdev/iscsi/bdev_iscsi.o 00:09:33.267 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:33.267 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:33.267 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:33.267 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:33.525 LIB libspdk_bdev_aio.a 00:09:33.525 LIB libspdk_bdev_raid.a 00:09:33.525 SO libspdk_bdev_aio.so.6.0 00:09:33.525 SO libspdk_bdev_raid.so.6.0 00:09:33.784 SYMLINK libspdk_bdev_aio.so 00:09:33.784 LIB libspdk_bdev_ftl.a 00:09:33.784 SO libspdk_bdev_ftl.so.6.0 00:09:33.784 SYMLINK libspdk_bdev_raid.so 00:09:33.784 SYMLINK libspdk_bdev_ftl.so 00:09:34.041 LIB libspdk_bdev_iscsi.a 00:09:34.041 SO libspdk_bdev_iscsi.so.6.0 00:09:34.299 SYMLINK libspdk_bdev_iscsi.so 00:09:34.299 LIB libspdk_bdev_virtio.a 00:09:34.299 SO libspdk_bdev_virtio.so.6.0 00:09:34.605 SYMLINK libspdk_bdev_virtio.so 00:09:35.997 LIB libspdk_bdev_nvme.a 00:09:36.256 SO libspdk_bdev_nvme.so.7.1 
00:09:36.256 SYMLINK libspdk_bdev_nvme.so 00:09:36.822 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:36.822 CC module/event/subsystems/iobuf/iobuf.o 00:09:36.822 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:36.822 CC module/event/subsystems/keyring/keyring.o 00:09:36.822 CC module/event/subsystems/sock/sock.o 00:09:36.822 CC module/event/subsystems/fsdev/fsdev.o 00:09:36.822 CC module/event/subsystems/scheduler/scheduler.o 00:09:36.822 CC module/event/subsystems/vmd/vmd.o 00:09:36.822 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:37.081 LIB libspdk_event_fsdev.a 00:09:37.081 LIB libspdk_event_vhost_blk.a 00:09:37.081 SO libspdk_event_fsdev.so.1.0 00:09:37.081 LIB libspdk_event_keyring.a 00:09:37.081 LIB libspdk_event_vmd.a 00:09:37.081 LIB libspdk_event_sock.a 00:09:37.081 SO libspdk_event_vhost_blk.so.3.0 00:09:37.081 SO libspdk_event_keyring.so.1.0 00:09:37.081 LIB libspdk_event_scheduler.a 00:09:37.081 SO libspdk_event_sock.so.5.0 00:09:37.081 SO libspdk_event_vmd.so.6.0 00:09:37.081 LIB libspdk_event_iobuf.a 00:09:37.081 SYMLINK libspdk_event_fsdev.so 00:09:37.081 SO libspdk_event_scheduler.so.4.0 00:09:37.081 SO libspdk_event_iobuf.so.3.0 00:09:37.081 SYMLINK libspdk_event_vhost_blk.so 00:09:37.081 SYMLINK libspdk_event_sock.so 00:09:37.081 SYMLINK libspdk_event_vmd.so 00:09:37.081 SYMLINK libspdk_event_keyring.so 00:09:37.081 SYMLINK libspdk_event_scheduler.so 00:09:37.081 SYMLINK libspdk_event_iobuf.so 00:09:37.339 CC module/event/subsystems/accel/accel.o 00:09:37.598 LIB libspdk_event_accel.a 00:09:37.598 SO libspdk_event_accel.so.6.0 00:09:37.598 SYMLINK libspdk_event_accel.so 00:09:37.877 CC module/event/subsystems/bdev/bdev.o 00:09:38.135 LIB libspdk_event_bdev.a 00:09:38.135 SO libspdk_event_bdev.so.6.0 00:09:38.393 SYMLINK libspdk_event_bdev.so 00:09:38.393 CC module/event/subsystems/scsi/scsi.o 00:09:38.393 CC module/event/subsystems/nbd/nbd.o 00:09:38.393 CC module/event/subsystems/ublk/ublk.o 00:09:38.393 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:38.393 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:38.649 LIB libspdk_event_ublk.a 00:09:38.649 LIB libspdk_event_scsi.a 00:09:38.649 SO libspdk_event_ublk.so.3.0 00:09:38.649 LIB libspdk_event_nbd.a 00:09:38.649 SO libspdk_event_scsi.so.6.0 00:09:38.649 SO libspdk_event_nbd.so.6.0 00:09:38.649 SYMLINK libspdk_event_ublk.so 00:09:38.649 SYMLINK libspdk_event_nbd.so 00:09:38.649 SYMLINK libspdk_event_scsi.so 00:09:38.906 LIB libspdk_event_nvmf.a 00:09:38.906 SO libspdk_event_nvmf.so.6.0 00:09:38.906 SYMLINK libspdk_event_nvmf.so 00:09:38.906 CC module/event/subsystems/iscsi/iscsi.o 00:09:38.906 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:39.164 LIB libspdk_event_vhost_scsi.a 00:09:39.165 LIB libspdk_event_iscsi.a 00:09:39.165 SO libspdk_event_vhost_scsi.so.3.0 00:09:39.165 SO libspdk_event_iscsi.so.6.0 00:09:39.422 SYMLINK libspdk_event_vhost_scsi.so 00:09:39.422 SYMLINK libspdk_event_iscsi.so 00:09:39.422 SO libspdk.so.6.0 00:09:39.422 SYMLINK libspdk.so 00:09:39.680 CC test/rpc_client/rpc_client_test.o 00:09:39.680 CXX app/trace/trace.o 00:09:39.680 TEST_HEADER include/spdk/accel.h 00:09:39.680 TEST_HEADER include/spdk/accel_module.h 00:09:39.680 TEST_HEADER include/spdk/assert.h 00:09:39.680 CC app/trace_record/trace_record.o 00:09:39.680 TEST_HEADER include/spdk/barrier.h 00:09:39.680 TEST_HEADER include/spdk/base64.h 00:09:39.680 TEST_HEADER include/spdk/bdev.h 00:09:39.680 TEST_HEADER include/spdk/bdev_module.h 00:09:39.680 TEST_HEADER include/spdk/bdev_zone.h 00:09:39.680 TEST_HEADER 
include/spdk/bit_array.h 00:09:39.680 TEST_HEADER include/spdk/bit_pool.h 00:09:39.680 TEST_HEADER include/spdk/blob_bdev.h 00:09:39.680 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:39.680 TEST_HEADER include/spdk/blobfs.h 00:09:39.680 TEST_HEADER include/spdk/blob.h 00:09:39.680 TEST_HEADER include/spdk/conf.h 00:09:39.680 TEST_HEADER include/spdk/config.h 00:09:39.680 TEST_HEADER include/spdk/cpuset.h 00:09:39.680 TEST_HEADER include/spdk/crc16.h 00:09:39.939 TEST_HEADER include/spdk/crc32.h 00:09:39.939 TEST_HEADER include/spdk/crc64.h 00:09:39.939 TEST_HEADER include/spdk/dif.h 00:09:39.939 TEST_HEADER include/spdk/dma.h 00:09:39.939 TEST_HEADER include/spdk/endian.h 00:09:39.939 TEST_HEADER include/spdk/env_dpdk.h 00:09:39.939 TEST_HEADER include/spdk/env.h 00:09:39.939 TEST_HEADER include/spdk/event.h 00:09:39.939 TEST_HEADER include/spdk/fd_group.h 00:09:39.939 TEST_HEADER include/spdk/fd.h 00:09:39.939 TEST_HEADER include/spdk/file.h 00:09:39.939 TEST_HEADER include/spdk/fsdev.h 00:09:39.939 TEST_HEADER include/spdk/fsdev_module.h 00:09:39.939 TEST_HEADER include/spdk/ftl.h 00:09:39.939 TEST_HEADER include/spdk/fuse_dispatcher.h 00:09:39.939 TEST_HEADER include/spdk/gpt_spec.h 00:09:39.939 TEST_HEADER include/spdk/hexlify.h 00:09:39.939 TEST_HEADER include/spdk/histogram_data.h 00:09:39.939 TEST_HEADER include/spdk/idxd.h 00:09:39.939 TEST_HEADER include/spdk/idxd_spec.h 00:09:39.939 CC test/thread/poller_perf/poller_perf.o 00:09:39.939 TEST_HEADER include/spdk/init.h 00:09:39.939 TEST_HEADER include/spdk/ioat.h 00:09:39.939 TEST_HEADER include/spdk/ioat_spec.h 00:09:39.939 TEST_HEADER include/spdk/iscsi_spec.h 00:09:39.939 TEST_HEADER include/spdk/json.h 00:09:39.939 CC examples/ioat/perf/perf.o 00:09:39.939 TEST_HEADER include/spdk/jsonrpc.h 00:09:39.939 TEST_HEADER include/spdk/keyring.h 00:09:39.939 TEST_HEADER include/spdk/keyring_module.h 00:09:39.939 TEST_HEADER include/spdk/likely.h 00:09:39.939 CC examples/util/zipf/zipf.o 00:09:39.939 TEST_HEADER include/spdk/log.h 00:09:39.939 TEST_HEADER include/spdk/lvol.h 00:09:39.939 CC test/app/bdev_svc/bdev_svc.o 00:09:39.939 TEST_HEADER include/spdk/md5.h 00:09:39.939 TEST_HEADER include/spdk/memory.h 00:09:39.939 TEST_HEADER include/spdk/mmio.h 00:09:39.939 TEST_HEADER include/spdk/nbd.h 00:09:39.939 TEST_HEADER include/spdk/net.h 00:09:39.939 TEST_HEADER include/spdk/notify.h 00:09:39.939 TEST_HEADER include/spdk/nvme.h 00:09:39.939 TEST_HEADER include/spdk/nvme_intel.h 00:09:39.939 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:39.939 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:39.939 TEST_HEADER include/spdk/nvme_spec.h 00:09:39.939 TEST_HEADER include/spdk/nvme_zns.h 00:09:39.939 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:39.939 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:39.939 TEST_HEADER include/spdk/nvmf.h 00:09:39.939 TEST_HEADER include/spdk/nvmf_spec.h 00:09:39.939 TEST_HEADER include/spdk/nvmf_transport.h 00:09:39.939 TEST_HEADER include/spdk/opal.h 00:09:39.939 TEST_HEADER include/spdk/opal_spec.h 00:09:39.939 TEST_HEADER include/spdk/pci_ids.h 00:09:39.939 TEST_HEADER include/spdk/pipe.h 00:09:39.939 CC test/env/mem_callbacks/mem_callbacks.o 00:09:39.939 TEST_HEADER include/spdk/queue.h 00:09:39.939 TEST_HEADER include/spdk/reduce.h 00:09:39.939 TEST_HEADER include/spdk/rpc.h 00:09:39.939 CC test/dma/test_dma/test_dma.o 00:09:39.939 TEST_HEADER include/spdk/scheduler.h 00:09:39.939 TEST_HEADER include/spdk/scsi.h 00:09:39.939 TEST_HEADER include/spdk/scsi_spec.h 00:09:39.939 TEST_HEADER include/spdk/sock.h 
00:09:39.939 TEST_HEADER include/spdk/stdinc.h 00:09:39.939 TEST_HEADER include/spdk/string.h 00:09:39.939 TEST_HEADER include/spdk/thread.h 00:09:39.939 TEST_HEADER include/spdk/trace.h 00:09:39.939 TEST_HEADER include/spdk/trace_parser.h 00:09:39.939 TEST_HEADER include/spdk/tree.h 00:09:39.939 TEST_HEADER include/spdk/ublk.h 00:09:39.939 TEST_HEADER include/spdk/util.h 00:09:39.939 TEST_HEADER include/spdk/uuid.h 00:09:40.261 TEST_HEADER include/spdk/version.h 00:09:40.261 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:40.261 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:40.261 TEST_HEADER include/spdk/vhost.h 00:09:40.261 TEST_HEADER include/spdk/vmd.h 00:09:40.261 TEST_HEADER include/spdk/xor.h 00:09:40.261 TEST_HEADER include/spdk/zipf.h 00:09:40.261 CXX test/cpp_headers/accel.o 00:09:40.261 LINK rpc_client_test 00:09:40.261 LINK poller_perf 00:09:40.261 LINK zipf 00:09:40.261 LINK bdev_svc 00:09:40.261 LINK spdk_trace_record 00:09:40.261 LINK ioat_perf 00:09:40.524 CXX test/cpp_headers/accel_module.o 00:09:40.524 LINK spdk_trace 00:09:40.524 CC app/nvmf_tgt/nvmf_main.o 00:09:40.789 CC examples/ioat/verify/verify.o 00:09:40.789 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:40.789 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:40.789 CXX test/cpp_headers/assert.o 00:09:40.789 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:41.046 CXX test/cpp_headers/barrier.o 00:09:41.046 LINK test_dma 00:09:41.046 LINK nvmf_tgt 00:09:41.046 CC examples/thread/thread/thread_ex.o 00:09:41.046 LINK verify 00:09:41.046 LINK mem_callbacks 00:09:41.046 LINK interrupt_tgt 00:09:41.304 CXX test/cpp_headers/base64.o 00:09:41.304 CC app/iscsi_tgt/iscsi_tgt.o 00:09:41.304 CXX test/cpp_headers/bdev.o 00:09:41.561 CC test/env/vtophys/vtophys.o 00:09:41.561 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:41.561 LINK thread 00:09:41.819 LINK nvme_fuzz 00:09:41.819 LINK iscsi_tgt 00:09:41.819 CC test/event/event_perf/event_perf.o 00:09:41.819 CXX test/cpp_headers/bdev_module.o 00:09:41.819 LINK vtophys 00:09:41.819 CC test/nvme/aer/aer.o 00:09:41.819 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:41.819 CC test/accel/dif/dif.o 00:09:42.077 CXX test/cpp_headers/bdev_zone.o 00:09:42.078 LINK event_perf 00:09:42.335 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:42.335 CC examples/sock/hello_world/hello_sock.o 00:09:42.335 CC test/nvme/reset/reset.o 00:09:42.335 CC app/spdk_tgt/spdk_tgt.o 00:09:42.335 CXX test/cpp_headers/bit_array.o 00:09:42.335 LINK aer 00:09:42.593 CC test/event/reactor/reactor.o 00:09:42.593 LINK env_dpdk_post_init 00:09:42.851 CXX test/cpp_headers/bit_pool.o 00:09:42.851 LINK vhost_fuzz 00:09:42.851 LINK reactor 00:09:42.851 LINK hello_sock 00:09:42.851 CXX test/cpp_headers/blob_bdev.o 00:09:42.851 LINK spdk_tgt 00:09:42.851 LINK reset 00:09:42.851 CC test/env/memory/memory_ut.o 00:09:43.109 CXX test/cpp_headers/blobfs_bdev.o 00:09:43.109 CXX test/cpp_headers/blobfs.o 00:09:43.368 CC test/event/reactor_perf/reactor_perf.o 00:09:43.368 CC test/app/histogram_perf/histogram_perf.o 00:09:43.368 CC examples/vmd/lsvmd/lsvmd.o 00:09:43.627 CC test/nvme/sgl/sgl.o 00:09:43.627 LINK dif 00:09:43.627 CC app/spdk_lspci/spdk_lspci.o 00:09:43.627 LINK reactor_perf 00:09:43.627 CXX test/cpp_headers/blob.o 00:09:43.627 LINK histogram_perf 00:09:43.627 CC examples/vmd/led/led.o 00:09:43.627 LINK lsvmd 00:09:43.886 LINK spdk_lspci 00:09:43.886 CXX test/cpp_headers/conf.o 00:09:44.144 CC test/event/app_repeat/app_repeat.o 00:09:44.144 LINK led 00:09:44.144 CXX test/cpp_headers/config.o 00:09:44.144 
CC test/event/scheduler/scheduler.o 00:09:44.144 LINK sgl 00:09:44.144 CC examples/idxd/perf/perf.o 00:09:44.144 CXX test/cpp_headers/cpuset.o 00:09:44.144 CXX test/cpp_headers/crc16.o 00:09:44.402 CC app/spdk_nvme_perf/perf.o 00:09:44.402 LINK app_repeat 00:09:44.402 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:44.402 LINK scheduler 00:09:44.661 LINK iscsi_fuzz 00:09:44.661 CC test/nvme/e2edp/nvme_dp.o 00:09:44.661 CXX test/cpp_headers/crc32.o 00:09:44.919 CC examples/accel/perf/accel_perf.o 00:09:44.919 LINK hello_fsdev 00:09:44.919 LINK idxd_perf 00:09:44.919 CXX test/cpp_headers/crc64.o 00:09:44.919 CC test/app/jsoncat/jsoncat.o 00:09:44.919 CC examples/blob/hello_world/hello_blob.o 00:09:45.177 CC examples/blob/cli/blobcli.o 00:09:45.177 LINK nvme_dp 00:09:45.177 LINK memory_ut 00:09:45.177 LINK jsoncat 00:09:45.177 CXX test/cpp_headers/dif.o 00:09:45.435 CC test/nvme/overhead/overhead.o 00:09:45.435 LINK hello_blob 00:09:45.435 CC examples/nvme/hello_world/hello_world.o 00:09:45.435 CXX test/cpp_headers/dma.o 00:09:45.435 CXX test/cpp_headers/endian.o 00:09:45.693 CC test/app/stub/stub.o 00:09:45.693 CC test/env/pci/pci_ut.o 00:09:45.952 LINK overhead 00:09:45.952 CXX test/cpp_headers/env_dpdk.o 00:09:45.952 CC test/nvme/err_injection/err_injection.o 00:09:45.952 LINK hello_world 00:09:45.952 CC test/nvme/startup/startup.o 00:09:45.952 LINK accel_perf 00:09:45.952 LINK stub 00:09:46.246 LINK blobcli 00:09:46.246 CXX test/cpp_headers/env.o 00:09:46.246 LINK spdk_nvme_perf 00:09:46.246 LINK err_injection 00:09:46.246 CC examples/nvme/reconnect/reconnect.o 00:09:46.246 LINK startup 00:09:46.517 CC test/blobfs/mkfs/mkfs.o 00:09:46.517 CXX test/cpp_headers/event.o 00:09:46.517 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:46.517 CC examples/nvme/arbitration/arbitration.o 00:09:46.517 CC test/nvme/reserve/reserve.o 00:09:46.517 LINK pci_ut 00:09:46.776 CC app/spdk_nvme_identify/identify.o 00:09:46.776 CXX test/cpp_headers/fd_group.o 00:09:46.776 CC app/spdk_nvme_discover/discovery_aer.o 00:09:46.776 LINK mkfs 00:09:47.034 CC examples/bdev/hello_world/hello_bdev.o 00:09:47.034 LINK reserve 00:09:47.034 LINK reconnect 00:09:47.034 CXX test/cpp_headers/fd.o 00:09:47.034 LINK arbitration 00:09:47.034 LINK spdk_nvme_discover 00:09:47.292 CC examples/bdev/bdevperf/bdevperf.o 00:09:47.293 CC test/nvme/simple_copy/simple_copy.o 00:09:47.293 CXX test/cpp_headers/file.o 00:09:47.552 CC app/spdk_top/spdk_top.o 00:09:47.552 LINK hello_bdev 00:09:47.552 LINK nvme_manage 00:09:47.552 CC app/vhost/vhost.o 00:09:47.810 CXX test/cpp_headers/fsdev.o 00:09:47.810 CC test/lvol/esnap/esnap.o 00:09:47.810 CC test/bdev/bdevio/bdevio.o 00:09:47.810 LINK simple_copy 00:09:48.068 CC examples/nvme/hotplug/hotplug.o 00:09:48.068 CXX test/cpp_headers/fsdev_module.o 00:09:48.068 LINK vhost 00:09:48.068 CC app/spdk_dd/spdk_dd.o 00:09:48.326 CC test/nvme/connect_stress/connect_stress.o 00:09:48.326 CXX test/cpp_headers/ftl.o 00:09:48.584 LINK hotplug 00:09:48.584 CXX test/cpp_headers/fuse_dispatcher.o 00:09:48.584 LINK bdevio 00:09:48.584 LINK spdk_nvme_identify 00:09:48.841 LINK connect_stress 00:09:48.841 CC app/fio/nvme/fio_plugin.o 00:09:48.841 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:48.841 LINK spdk_dd 00:09:49.098 CXX test/cpp_headers/gpt_spec.o 00:09:49.098 CC examples/nvme/abort/abort.o 00:09:49.098 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:49.098 LINK bdevperf 00:09:49.098 CC test/nvme/boot_partition/boot_partition.o 00:09:49.356 LINK cmb_copy 00:09:49.356 CXX test/cpp_headers/hexlify.o 
00:09:49.356 CC test/nvme/compliance/nvme_compliance.o 00:09:49.356 LINK pmr_persistence 00:09:49.613 LINK boot_partition 00:09:49.613 CXX test/cpp_headers/histogram_data.o 00:09:49.613 LINK spdk_top 00:09:49.613 CXX test/cpp_headers/idxd.o 00:09:49.871 CC test/nvme/fused_ordering/fused_ordering.o 00:09:49.871 LINK spdk_nvme 00:09:49.871 CC app/fio/bdev/fio_plugin.o 00:09:49.871 LINK abort 00:09:50.130 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:50.130 CC test/nvme/fdp/fdp.o 00:09:50.130 CXX test/cpp_headers/idxd_spec.o 00:09:50.130 CC test/nvme/cuse/cuse.o 00:09:50.130 CXX test/cpp_headers/init.o 00:09:50.130 LINK nvme_compliance 00:09:50.130 LINK fused_ordering 00:09:50.389 CXX test/cpp_headers/ioat.o 00:09:50.389 CXX test/cpp_headers/ioat_spec.o 00:09:50.389 LINK doorbell_aers 00:09:50.389 CXX test/cpp_headers/iscsi_spec.o 00:09:50.389 CXX test/cpp_headers/json.o 00:09:50.647 CXX test/cpp_headers/jsonrpc.o 00:09:50.647 CXX test/cpp_headers/keyring.o 00:09:50.647 CXX test/cpp_headers/keyring_module.o 00:09:50.647 CXX test/cpp_headers/likely.o 00:09:50.647 CC examples/nvmf/nvmf/nvmf.o 00:09:50.906 LINK fdp 00:09:50.906 CXX test/cpp_headers/log.o 00:09:50.906 LINK spdk_bdev 00:09:50.906 CXX test/cpp_headers/lvol.o 00:09:50.906 CXX test/cpp_headers/md5.o 00:09:50.906 CXX test/cpp_headers/memory.o 00:09:50.906 CXX test/cpp_headers/mmio.o 00:09:51.164 CXX test/cpp_headers/nbd.o 00:09:51.164 CXX test/cpp_headers/net.o 00:09:51.164 CXX test/cpp_headers/notify.o 00:09:51.164 CXX test/cpp_headers/nvme.o 00:09:51.164 CXX test/cpp_headers/nvme_intel.o 00:09:51.164 CXX test/cpp_headers/nvme_ocssd.o 00:09:51.164 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:51.422 LINK nvmf 00:09:51.422 CXX test/cpp_headers/nvme_spec.o 00:09:51.422 CXX test/cpp_headers/nvme_zns.o 00:09:51.422 CXX test/cpp_headers/nvmf_cmd.o 00:09:51.422 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:51.422 CXX test/cpp_headers/nvmf.o 00:09:51.685 CXX test/cpp_headers/nvmf_spec.o 00:09:51.685 CXX test/cpp_headers/nvmf_transport.o 00:09:51.685 CXX test/cpp_headers/opal.o 00:09:51.685 CXX test/cpp_headers/opal_spec.o 00:09:51.944 CXX test/cpp_headers/pci_ids.o 00:09:51.944 CXX test/cpp_headers/pipe.o 00:09:51.944 CXX test/cpp_headers/queue.o 00:09:51.944 CXX test/cpp_headers/reduce.o 00:09:51.944 CXX test/cpp_headers/rpc.o 00:09:51.944 CXX test/cpp_headers/scheduler.o 00:09:52.202 CXX test/cpp_headers/scsi.o 00:09:52.202 CXX test/cpp_headers/scsi_spec.o 00:09:52.202 CXX test/cpp_headers/sock.o 00:09:52.202 CXX test/cpp_headers/stdinc.o 00:09:52.202 CXX test/cpp_headers/string.o 00:09:52.202 CXX test/cpp_headers/thread.o 00:09:52.513 CXX test/cpp_headers/trace.o 00:09:52.513 CXX test/cpp_headers/trace_parser.o 00:09:52.513 CXX test/cpp_headers/tree.o 00:09:52.513 CXX test/cpp_headers/ublk.o 00:09:52.513 CXX test/cpp_headers/util.o 00:09:52.513 CXX test/cpp_headers/uuid.o 00:09:52.513 CXX test/cpp_headers/version.o 00:09:52.513 CXX test/cpp_headers/vfio_user_pci.o 00:09:52.819 CXX test/cpp_headers/vfio_user_spec.o 00:09:52.820 CXX test/cpp_headers/vhost.o 00:09:52.820 CXX test/cpp_headers/vmd.o 00:09:52.820 CXX test/cpp_headers/xor.o 00:09:52.820 CXX test/cpp_headers/zipf.o 00:09:53.078 LINK cuse 00:09:58.342 LINK esnap 00:09:58.601 00:09:58.601 real 2m26.702s 00:09:58.601 user 14m34.100s 00:09:58.601 sys 2m19.790s 00:09:58.601 13:04:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:09:58.601 13:04:05 make -- common/autotest_common.sh@10 -- $ set +x 00:09:58.601 ************************************ 00:09:58.601 END TEST make 
00:09:58.601 ************************************ 00:09:58.601 13:04:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:58.601 13:04:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:58.601 13:04:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:58.601 13:04:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:58.601 13:04:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:09:58.601 13:04:05 -- pm/common@44 -- $ pid=5335 00:09:58.601 13:04:05 -- pm/common@50 -- $ kill -TERM 5335 00:09:58.601 13:04:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:58.601 13:04:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:58.860 13:04:05 -- pm/common@44 -- $ pid=5336 00:09:58.860 13:04:05 -- pm/common@50 -- $ kill -TERM 5336 00:09:58.860 13:04:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:58.860 13:04:05 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:58.860 13:04:05 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.860 13:04:05 -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.860 13:04:05 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.860 13:04:05 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.860 13:04:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.860 13:04:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.860 13:04:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.860 13:04:05 -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.860 13:04:05 -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.860 13:04:05 -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.860 13:04:05 -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.860 13:04:05 -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.860 13:04:05 -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.860 13:04:05 -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.860 13:04:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.860 13:04:05 -- scripts/common.sh@344 -- # case "$op" in 00:09:58.860 13:04:05 -- scripts/common.sh@345 -- # : 1 00:09:58.860 13:04:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.860 13:04:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.860 13:04:05 -- scripts/common.sh@365 -- # decimal 1 00:09:58.860 13:04:05 -- scripts/common.sh@353 -- # local d=1 00:09:58.860 13:04:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.860 13:04:05 -- scripts/common.sh@355 -- # echo 1 00:09:58.860 13:04:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.860 13:04:05 -- scripts/common.sh@366 -- # decimal 2 00:09:58.860 13:04:05 -- scripts/common.sh@353 -- # local d=2 00:09:58.860 13:04:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.860 13:04:05 -- scripts/common.sh@355 -- # echo 2 00:09:58.860 13:04:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.860 13:04:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.860 13:04:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.860 13:04:05 -- scripts/common.sh@368 -- # return 0 00:09:58.860 13:04:05 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.860 13:04:05 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.860 --rc genhtml_branch_coverage=1 00:09:58.860 --rc genhtml_function_coverage=1 00:09:58.860 --rc genhtml_legend=1 00:09:58.860 --rc geninfo_all_blocks=1 00:09:58.860 --rc geninfo_unexecuted_blocks=1 00:09:58.860 00:09:58.860 ' 00:09:58.860 13:04:05 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.860 --rc genhtml_branch_coverage=1 00:09:58.860 --rc genhtml_function_coverage=1 00:09:58.860 --rc genhtml_legend=1 00:09:58.860 --rc geninfo_all_blocks=1 00:09:58.860 --rc geninfo_unexecuted_blocks=1 00:09:58.860 00:09:58.860 ' 00:09:58.860 13:04:05 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.860 --rc genhtml_branch_coverage=1 00:09:58.860 --rc genhtml_function_coverage=1 00:09:58.860 --rc genhtml_legend=1 00:09:58.860 --rc geninfo_all_blocks=1 00:09:58.860 --rc geninfo_unexecuted_blocks=1 00:09:58.860 00:09:58.860 ' 00:09:58.860 13:04:05 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.860 --rc genhtml_branch_coverage=1 00:09:58.860 --rc genhtml_function_coverage=1 00:09:58.860 --rc genhtml_legend=1 00:09:58.860 --rc geninfo_all_blocks=1 00:09:58.860 --rc geninfo_unexecuted_blocks=1 00:09:58.860 00:09:58.860 ' 00:09:58.860 13:04:05 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.860 13:04:05 -- nvmf/common.sh@7 -- # uname -s 00:09:58.860 13:04:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.860 13:04:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.860 13:04:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.860 13:04:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.860 13:04:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.860 13:04:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.860 13:04:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.860 13:04:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.861 13:04:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.861 13:04:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.861 13:04:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e8b9e76b-c82e-4bbd-825d-5339581b2dd8 00:09:58.861 
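[editor's note] The cmp_versions xtrace earlier in this stretch (scripts/common.sh, the IFS=.-: / read -ra ver1 dance) is the dotted-version compare the driver uses to detect a pre-2.0 lcov. A minimal re-sketch of that logic, assuming purely numeric version components; the name version_lt is illustrative, not SPDK's:

    # Split both versions on . - : and compare field by field, padding the
    # shorter one with zeros; succeeds (returns 0) when $1 is strictly older.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    # Mirrors the trace: lcov --version | awk '{print $NF}' yields e.g. 1.15
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"

Here the check evaluates true (1.15 < 2), which is why the 1.x-era --rc lcov_branch_coverage / lcov_function_coverage options are exported into LCOV_OPTS above.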
13:04:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=e8b9e76b-c82e-4bbd-825d-5339581b2dd8 00:09:58.861 13:04:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.861 13:04:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.861 13:04:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:58.861 13:04:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.861 13:04:05 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.861 13:04:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.861 13:04:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.861 13:04:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.861 13:04:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.861 13:04:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.861 13:04:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.861 13:04:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.861 13:04:05 -- paths/export.sh@5 -- # export PATH 00:09:58.861 13:04:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.861 13:04:05 -- nvmf/common.sh@51 -- # : 0 00:09:58.861 13:04:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.861 13:04:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.861 13:04:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.861 13:04:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.861 13:04:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.861 13:04:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.861 13:04:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.861 13:04:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.861 13:04:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.861 13:04:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:58.861 13:04:05 -- spdk/autotest.sh@32 -- # uname -s 00:09:58.861 13:04:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:58.861 13:04:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:58.861 13:04:05 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:58.861 13:04:05 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:58.861 13:04:05 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:58.861 13:04:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:59.119 13:04:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:59.119 13:04:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:59.119 13:04:05 -- spdk/autotest.sh@48 -- # udevadm_pid=55386 00:09:59.119 13:04:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:59.119 13:04:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:59.119 13:04:05 -- pm/common@17 -- # local monitor 00:09:59.119 13:04:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:59.119 13:04:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:59.119 13:04:05 -- pm/common@25 -- # sleep 1 00:09:59.119 13:04:05 -- pm/common@21 -- # date +%s 00:09:59.119 13:04:05 -- pm/common@21 -- # date +%s 00:09:59.119 13:04:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490245 00:09:59.119 13:04:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490245 00:09:59.119 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490245_collect-cpu-load.pm.log 00:09:59.119 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490245_collect-vmstat.pm.log 00:10:00.053 13:04:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:00.053 13:04:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:00.054 13:04:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.054 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:10:00.054 13:04:06 -- spdk/autotest.sh@59 -- # create_test_list 00:10:00.054 13:04:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:10:00.054 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:10:00.054 13:04:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:00.054 13:04:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:00.054 13:04:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:00.054 13:04:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:00.054 13:04:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:00.054 13:04:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:00.054 13:04:06 -- common/autotest_common.sh@1457 -- # uname 00:10:00.054 13:04:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:10:00.054 13:04:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:00.054 13:04:06 -- common/autotest_common.sh@1477 -- # uname 00:10:00.054 13:04:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:10:00.054 13:04:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:10:00.054 13:04:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:10:00.312 lcov: LCOV version 1.15 00:10:00.312 13:04:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:18.393 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:18.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:40.409 13:04:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:10:40.409 13:04:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.409 13:04:43 -- common/autotest_common.sh@10 -- # set +x 00:10:40.409 13:04:43 -- spdk/autotest.sh@78 -- # rm -f 00:10:40.409 13:04:43 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:40.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:40.409 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:40.409 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:40.409 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:10:40.409 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:10:40.409 13:04:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:10:40.409 13:04:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:40.409 13:04:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:40.409 13:04:44 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:40.409 13:04:44 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:40.409 13:04:44 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:40.409 13:04:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:40.409 13:04:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:40.409 13:04:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:40.409 13:04:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:40.409 13:04:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:40.409 13:04:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:40.409 13:04:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:10:40.409 13:04:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:10:40.409 13:04:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:40.409 13:04:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:10:40.409 13:04:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:40.409 13:04:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:10:40.409 13:04:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:40.409 13:04:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:10:40.409 13:04:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:40.409 13:04:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:10:40.409 13:04:44 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:40.409 13:04:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:40.409 13:04:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:40.409 13:04:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:10:40.409 13:04:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:40.409 13:04:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:40.409 13:04:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:10:40.409 13:04:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:10:40.409 13:04:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:40.409 No valid GPT data, bailing 00:10:40.409 13:04:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:40.409 13:04:44 -- scripts/common.sh@394 -- # pt= 00:10:40.409 13:04:44 -- scripts/common.sh@395 -- # return 1 00:10:40.409 13:04:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:40.409 1+0 records in 00:10:40.409 1+0 records out 00:10:40.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114171 s, 91.8 MB/s 00:10:40.409 13:04:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:40.409 13:04:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:40.409 13:04:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:10:40.409 13:04:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:10:40.409 13:04:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:40.409 No valid GPT data, bailing 00:10:40.409 13:04:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:40.409 13:04:45 -- scripts/common.sh@394 -- # pt= 00:10:40.409 13:04:45 -- scripts/common.sh@395 -- # return 1 00:10:40.409 13:04:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:40.409 1+0 records in 00:10:40.409 1+0 records out 00:10:40.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385079 s, 272 MB/s 00:10:40.409 13:04:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:40.409 13:04:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:40.410 13:04:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:10:40.410 13:04:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:10:40.410 13:04:45 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:10:40.410 No valid GPT data, bailing 00:10:40.410 13:04:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:10:40.410 13:04:45 -- scripts/common.sh@394 -- # pt= 00:10:40.410 13:04:45 -- scripts/common.sh@395 -- # return 1 00:10:40.410 13:04:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:10:40.410 1+0 records in 00:10:40.410 1+0 records out 00:10:40.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00370389 s, 283 MB/s 00:10:40.410 13:04:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:40.410 13:04:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:40.410 13:04:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:10:40.410 13:04:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:10:40.410 13:04:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:10:40.410 No valid GPT data, bailing 00:10:40.410 13:04:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:10:40.410 13:04:45 -- scripts/common.sh@394 -- # pt= 00:10:40.410 13:04:45 -- scripts/common.sh@395 -- # return 1 00:10:40.410 13:04:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:10:40.410 1+0 records in 00:10:40.410 1+0 records out 00:10:40.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431876 s, 243 MB/s 00:10:40.410 13:04:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:40.410 13:04:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:40.410 13:04:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:10:40.410 13:04:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:10:40.410 13:04:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:10:40.410 No valid GPT data, bailing 00:10:40.410 13:04:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:10:40.410 13:04:45 -- scripts/common.sh@394 -- # pt= 00:10:40.410 13:04:45 -- scripts/common.sh@395 -- # return 1 00:10:40.410 13:04:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:10:40.410 1+0 records in 00:10:40.410 1+0 records out 00:10:40.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417203 s, 251 MB/s 00:10:40.410 13:04:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:40.410 13:04:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:40.410 13:04:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:10:40.410 13:04:45 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:10:40.410 13:04:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:10:40.410 No valid GPT data, bailing 00:10:40.410 13:04:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:10:40.410 13:04:45 -- scripts/common.sh@394 -- # pt= 00:10:40.410 13:04:45 -- scripts/common.sh@395 -- # return 1 00:10:40.410 13:04:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:10:40.410 1+0 records in 00:10:40.410 1+0 records out 00:10:40.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432068 s, 243 MB/s 00:10:40.410 13:04:45 -- spdk/autotest.sh@105 -- # sync 00:10:40.410 13:04:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:40.410 13:04:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:40.410 13:04:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:40.984 
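[editor's note] The repeated "No valid GPT data, bailing" / dd pairs above are autotest's pre-test scrub: every whole namespace (the nvme*n!(*p*) glob excludes partitions) that is not zoned and shows no partition table gets its first MiB zeroed. A condensed sketch of that loop; spdk-gpt.py is SPDK's own probe, the zoned_devs map comes from the get_zoned_devs trace earlier, and the wipe policy here is inferred from the trace rather than copied from autotest.sh:

    shopt -s extglob                                      # enables the !(*p*) glob
    # zoned_devs: assoc array filled earlier from /sys/block/*/queue/zoned checks
    for dev in /dev/nvme*n!(*p*); do                      # namespaces, not partitions
        [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue  # never zero a zoned namespace
        pt=$(blkid -s PTTYPE -o value "$dev" || true)     # blkid exits non-zero when no PT
        [[ -z $pt ]] && dd if=/dev/zero of="$dev" bs=1M count=1   # scrub stale metadata
    done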
13:04:47 -- spdk/autotest.sh@111 -- # uname -s
00:10:40.984 13:04:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:10:40.984 13:04:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:10:40.984 13:04:47 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:10:41.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:41.807 Hugepages
00:10:41.807 node hugesize free / total
00:10:41.807 node0 1048576kB 0 / 0
00:10:41.807 node0 2048kB 0 / 0
00:10:41.807
00:10:41.807 Type BDF Vendor Device NUMA Driver Device Block devices
00:10:41.807 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:10:42.064 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:10:42.064 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:10:42.064 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:10:42.323 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:10:42.323 13:04:48 -- spdk/autotest.sh@117 -- # uname -s
00:10:42.323 13:04:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:10:42.323 13:04:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:10:42.323 13:04:48 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:42.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:43.478 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:43.478 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:43.478 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:43.478 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:43.478 13:04:49 -- common/autotest_common.sh@1517 -- # sleep 1
00:10:44.410 13:04:50 -- common/autotest_common.sh@1518 -- # bdfs=()
00:10:44.410 13:04:50 -- common/autotest_common.sh@1518 -- # local bdfs
00:10:44.410 13:04:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:10:44.410 13:04:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:10:44.410 13:04:50 -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:44.410 13:04:50 -- common/autotest_common.sh@1498 -- # local bdfs
00:10:44.410 13:04:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:44.410 13:04:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:44.410 13:04:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:44.410 13:04:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:10:44.410 13:04:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:10:44.410 13:04:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:44.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:44.973 Waiting for block devices as requested
00:10:45.229 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:45.229 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:45.229 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:45.487 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:50.772 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:50.772 13:04:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:10:50.772 13:04:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
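[editor's note] The Hugepages and device tables in the setup.sh status output above can be cross-checked straight from sysfs, which is handy when a node refuses to allocate pages. A sketch, assuming the standard kernel sysfs layout (this is not SPDK's script, just the same data read by hand):

    # Print per-node hugepage pools: free and total pages for every page size.
    for node in /sys/devices/system/node/node*; do
        for pool in "$node"/hugepages/hugepages-*; do
            size=${pool##*hugepages-}      # e.g. 2048kB or 1048576kB, as in the table
            printf '%s %s free=%s total=%s\n' "${node##*/}" "$size" \
                "$(< "$pool/free_hugepages")" "$(< "$pool/nr_hugepages")"
        done
    done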
00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:50.772 13:04:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:50.772 13:04:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:10:50.772 13:04:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:10:50.772 13:04:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:50.772 13:04:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:50.772 13:04:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:50.772 13:04:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1543 -- # continue 00:10:50.772 13:04:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:50.772 13:04:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:50.772 13:04:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:50.772 13:04:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:10:50.772 13:04:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1543 -- # continue 00:10:50.772 13:04:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:50.772 13:04:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:50.772 13:04:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:50.772 13:04:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:50.772 13:04:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1543 -- # continue 00:10:50.772 13:04:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:50.772 13:04:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:10:50.772 13:04:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:50.772 13:04:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:50.772 13:04:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:50.772 13:04:56 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:50.772 13:04:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:50.772 13:04:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:50.772 13:04:56 -- common/autotest_common.sh@1543 -- # continue 00:10:50.772 13:04:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:50.773 13:04:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.773 13:04:56 -- common/autotest_common.sh@10 -- # set +x 00:10:50.773 13:04:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:50.773 13:04:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.773 13:04:57 -- common/autotest_common.sh@10 -- # set +x 00:10:50.773 13:04:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:51.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.596 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.596 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.596 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.596 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.854 13:04:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:51.854 13:04:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.854 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:10:51.854 13:04:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:51.854 13:04:58 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:10:51.854 13:04:58 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:10:51.854 13:04:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:10:51.854 13:04:58 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:10:51.854 13:04:58 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:10:51.854 13:04:58 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:10:51.854 13:04:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:10:51.854 13:04:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:51.854 13:04:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:51.854 13:04:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:51.854 13:04:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:51.854 13:04:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:51.854 13:04:58 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:51.854 13:04:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:51.854 13:04:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:51.854 13:04:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:10:51.854 13:04:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:51.854 13:04:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:51.854 13:04:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:51.854 13:04:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:10:51.854 13:04:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:51.854 
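[editor's note] Each per-controller block around this point does the same two things: resolve a PCI address to its controller node through sysfs, then ask nvme id-ctrl whether the controller advertises namespace management (bit 3 of OACS; 0x12a & 0x8 is non-zero, hence oacs_ns_manage=8 in the trace). Roughly, under the same assumptions the trace makes (one controller per bdf, nvme-cli installed):

    get_nvme_ctrlr_from_bdf() {            # e.g. 0000:00:10.0 -> nvme1 on this box
        local sysfs
        sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$1/nvme/nvme")
        [[ -n $sysfs ]] && basename "$sysfs"
    }
    ctrlr=/dev/$(get_nvme_ctrlr_from_bdf 0000:00:10.0)
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0x12a' here
    (( oacs & 0x8 )) && echo "$ctrlr supports namespace management"

The sysfs detour matters because kernel controller numbering (nvme0, nvme1, ...) is enumeration order, not PCI order; on this run 0000:00:10.0 maps to nvme1 and 0000:00:11.0 to nvme0.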
13:04:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:51.854 13:04:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:51.854 13:04:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:10:51.854 13:04:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:51.854 13:04:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:51.854 13:04:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:51.854 13:04:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:10:51.854 13:04:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:51.854 13:04:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:51.854 13:04:58 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:10:51.854 13:04:58 -- common/autotest_common.sh@1572 -- # return 0 00:10:51.854 13:04:58 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:10:51.854 13:04:58 -- common/autotest_common.sh@1580 -- # return 0 00:10:51.854 13:04:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:51.854 13:04:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:51.854 13:04:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:51.854 13:04:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:51.854 13:04:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:51.854 13:04:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.854 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:10:51.854 13:04:58 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:51.854 13:04:58 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:51.854 13:04:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.854 13:04:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.854 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:10:51.854 ************************************ 00:10:51.854 START TEST env 00:10:51.854 ************************************ 00:10:51.854 13:04:58 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:52.130 * Looking for test storage... 
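[editor's note] The cat /sys/bus/pci/devices/.../device checks just above are get_nvme_bdfs_by_id 0x0a54 inside opal_revert_cleanup: only controllers whose PCI device id matches are kept, and all four QEMU controllers here report 0x0010, so the list stays empty and the revert is skipped. A sketch of that filter (get_nvme_bdfs is the gen_nvme.sh | jq traddr helper traced earlier; names follow the trace):

    get_nvme_bdfs_by_id() {               # keep only bdfs whose PCI device id is $1
        local bdf
        local -a matches=()
        for bdf in $(get_nvme_bdfs); do
            [[ $(< "/sys/bus/pci/devices/$bdf/device") == "$1" ]] && matches+=("$bdf")
        done
        (( ${#matches[@]} )) && printf '%s\n' "${matches[@]}"
    }
    mapfile -t bdfs < <(get_nvme_bdfs_by_id 0x0a54)   # empty on this run: QEMU is 0x0010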
00:10:52.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.130 13:04:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.130 13:04:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.130 13:04:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.130 13:04:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.130 13:04:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.130 13:04:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.130 13:04:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.130 13:04:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.130 13:04:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.130 13:04:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.130 13:04:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.130 13:04:58 env -- scripts/common.sh@344 -- # case "$op" in 00:10:52.130 13:04:58 env -- scripts/common.sh@345 -- # : 1 00:10:52.130 13:04:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.130 13:04:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.130 13:04:58 env -- scripts/common.sh@365 -- # decimal 1 00:10:52.130 13:04:58 env -- scripts/common.sh@353 -- # local d=1 00:10:52.130 13:04:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.130 13:04:58 env -- scripts/common.sh@355 -- # echo 1 00:10:52.130 13:04:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.130 13:04:58 env -- scripts/common.sh@366 -- # decimal 2 00:10:52.130 13:04:58 env -- scripts/common.sh@353 -- # local d=2 00:10:52.130 13:04:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.130 13:04:58 env -- scripts/common.sh@355 -- # echo 2 00:10:52.130 13:04:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.130 13:04:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.130 13:04:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.130 13:04:58 env -- scripts/common.sh@368 -- # return 0 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.130 --rc genhtml_branch_coverage=1 00:10:52.130 --rc genhtml_function_coverage=1 00:10:52.130 --rc genhtml_legend=1 00:10:52.130 --rc geninfo_all_blocks=1 00:10:52.130 --rc geninfo_unexecuted_blocks=1 00:10:52.130 00:10:52.130 ' 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.130 --rc genhtml_branch_coverage=1 00:10:52.130 --rc genhtml_function_coverage=1 00:10:52.130 --rc genhtml_legend=1 00:10:52.130 --rc geninfo_all_blocks=1 00:10:52.130 --rc geninfo_unexecuted_blocks=1 00:10:52.130 00:10:52.130 ' 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.130 --rc genhtml_branch_coverage=1 00:10:52.130 --rc genhtml_function_coverage=1 00:10:52.130 --rc 
genhtml_legend=1 00:10:52.130 --rc geninfo_all_blocks=1 00:10:52.130 --rc geninfo_unexecuted_blocks=1 00:10:52.130 00:10:52.130 ' 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.130 --rc genhtml_branch_coverage=1 00:10:52.130 --rc genhtml_function_coverage=1 00:10:52.130 --rc genhtml_legend=1 00:10:52.130 --rc geninfo_all_blocks=1 00:10:52.130 --rc geninfo_unexecuted_blocks=1 00:10:52.130 00:10:52.130 ' 00:10:52.130 13:04:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:52.130 13:04:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.130 13:04:58 env -- common/autotest_common.sh@10 -- # set +x 00:10:52.130 ************************************ 00:10:52.130 START TEST env_memory 00:10:52.130 ************************************ 00:10:52.130 13:04:58 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:52.130 00:10:52.130 00:10:52.130 CUnit - A unit testing framework for C - Version 2.1-3 00:10:52.130 http://cunit.sourceforge.net/ 00:10:52.130 00:10:52.130 00:10:52.130 Suite: memory 00:10:52.130 Test: alloc and free memory map ...[2024-12-06 13:04:58.623433] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:52.387 passed 00:10:52.387 Test: mem map translation ...[2024-12-06 13:04:58.685214] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:52.387 [2024-12-06 13:04:58.685307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:52.387 [2024-12-06 13:04:58.685407] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:52.387 [2024-12-06 13:04:58.685441] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:52.387 passed 00:10:52.387 Test: mem map registration ...[2024-12-06 13:04:58.805233] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:52.387 [2024-12-06 13:04:58.805383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:52.387 passed 00:10:52.646 Test: mem map adjacent registrations ...passed 00:10:52.646 00:10:52.646 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.646 suites 1 1 n/a 0 0 00:10:52.646 tests 4 4 4 0 0 00:10:52.646 asserts 152 152 152 0 n/a 00:10:52.646 00:10:52.646 Elapsed time = 0.360 seconds 00:10:52.646 00:10:52.646 real 0m0.399s 00:10:52.646 user 0m0.372s 00:10:52.646 sys 0m0.019s 00:10:52.646 13:04:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.646 13:04:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:52.646 ************************************ 00:10:52.646 END TEST env_memory 00:10:52.646 ************************************ 00:10:52.646 13:04:58 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:52.646 13:04:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:52.646 13:04:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.646 13:04:58 env -- common/autotest_common.sh@10 -- # set +x 00:10:52.646 ************************************ 00:10:52.646 START TEST env_vtophys 00:10:52.646 ************************************ 00:10:52.646 13:04:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:52.646 EAL: lib.eal log level changed from notice to debug 00:10:52.646 EAL: Detected lcore 0 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 1 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 2 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 3 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 4 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 5 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 6 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 7 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 8 as core 0 on socket 0 00:10:52.646 EAL: Detected lcore 9 as core 0 on socket 0 00:10:52.646 EAL: Maximum logical cores by configuration: 128 00:10:52.646 EAL: Detected CPU lcores: 10 00:10:52.646 EAL: Detected NUMA nodes: 1 00:10:52.646 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:52.646 EAL: Detected shared linkage of DPDK 00:10:52.646 EAL: No shared files mode enabled, IPC will be disabled 00:10:52.646 EAL: Selected IOVA mode 'PA' 00:10:52.646 EAL: Probing VFIO support... 00:10:52.646 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:52.646 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:52.646 EAL: Ask a virtual area of 0x2e000 bytes 00:10:52.646 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:52.646 EAL: Setting up physically contiguous memory... 
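The EAL probe above ends with VFIO support skipped because neither vfio nor vfio-pci is loaded in this VM, which is why EAL settles on IOVA mode 'PA'. A minimal sketch of the same check, useful before launching the vtophys binary by hand; the helper name and message wording are illustrative, not taken from the SPDK scripts:

    # Mirror EAL's VFIO probe: it looks for the vfio and vfio_pci kernel
    # modules under /sys/module and skips VFIO support when they are absent.
    check_vfio() {
        local m
        for m in vfio vfio_pci; do
            if [[ ! -e /sys/module/$m ]]; then
                echo "module $m not loaded; EAL will skip VFIO support" >&2
                return 1
            fi
        done
        return 0
    }
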
00:10:52.646 EAL: Setting maximum number of open files to 524288 00:10:52.646 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:52.646 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:52.646 EAL: Ask a virtual area of 0x61000 bytes 00:10:52.646 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:52.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:52.646 EAL: Ask a virtual area of 0x400000000 bytes 00:10:52.646 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:52.646 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:52.646 EAL: Ask a virtual area of 0x61000 bytes 00:10:52.646 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:52.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:52.646 EAL: Ask a virtual area of 0x400000000 bytes 00:10:52.646 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:52.646 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:52.646 EAL: Ask a virtual area of 0x61000 bytes 00:10:52.646 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:52.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:52.646 EAL: Ask a virtual area of 0x400000000 bytes 00:10:52.646 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:52.646 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:52.646 EAL: Ask a virtual area of 0x61000 bytes 00:10:52.646 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:52.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:52.646 EAL: Ask a virtual area of 0x400000000 bytes 00:10:52.646 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:52.646 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:52.646 EAL: Hugepages will be freed exactly as allocated. 00:10:52.646 EAL: No shared files mode enabled, IPC is disabled 00:10:52.646 EAL: No shared files mode enabled, IPC is disabled 00:10:52.904 EAL: TSC frequency is ~2200000 KHz 00:10:52.904 EAL: Main lcore 0 is ready (tid=7f968a4b8a40;cpuset=[0]) 00:10:52.904 EAL: Trying to obtain current memory policy. 00:10:52.904 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:52.904 EAL: Restoring previous memory policy: 0 00:10:52.904 EAL: request: mp_malloc_sync 00:10:52.904 EAL: No shared files mode enabled, IPC is disabled 00:10:52.904 EAL: Heap on socket 0 was expanded by 2MB 00:10:52.904 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:52.904 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:52.904 EAL: Mem event callback 'spdk:(nil)' registered 00:10:52.904 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:10:52.904 00:10:52.904 00:10:52.904 CUnit - A unit testing framework for C - Version 2.1-3 00:10:52.904 http://cunit.sourceforge.net/ 00:10:52.904 00:10:52.904 00:10:52.904 Suite: components_suite 00:10:53.162 Test: vtophys_malloc_test ...passed 00:10:53.162 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
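Each "VA reserved for memseg list" record above reserves 0x400000000 bytes (16 GiB) of virtual address space, so the four lists together pin 64 GiB of VA before any hugepages are backed. A one-liner to total those reservations from a saved copy of this console output; the eal.log filename is an assumption, and strtonum requires GNU awk:

    # Sum the VA reserved for memseg lists (sizes are printed in hex
    # without a leading 0x).
    grep -o 'VA reserved for memseg list at 0x[0-9a-f]*, size [0-9a-f]*' eal.log \
      | awk '{ total += strtonum("0x" $NF) }
             END { printf "memseg VA reserved: %.0f GiB\n", total / 1024^3 }'
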
00:10:53.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:53.162 EAL: Restoring previous memory policy: 4 00:10:53.162 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.162 EAL: request: mp_malloc_sync 00:10:53.162 EAL: No shared files mode enabled, IPC is disabled 00:10:53.162 EAL: Heap on socket 0 was expanded by 4MB 00:10:53.162 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.162 EAL: request: mp_malloc_sync 00:10:53.162 EAL: No shared files mode enabled, IPC is disabled 00:10:53.162 EAL: Heap on socket 0 was shrunk by 4MB 00:10:53.162 EAL: Trying to obtain current memory policy. 00:10:53.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:53.162 EAL: Restoring previous memory policy: 4 00:10:53.162 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.162 EAL: request: mp_malloc_sync 00:10:53.162 EAL: No shared files mode enabled, IPC is disabled 00:10:53.162 EAL: Heap on socket 0 was expanded by 6MB 00:10:53.162 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.162 EAL: request: mp_malloc_sync 00:10:53.162 EAL: No shared files mode enabled, IPC is disabled 00:10:53.162 EAL: Heap on socket 0 was shrunk by 6MB 00:10:53.162 EAL: Trying to obtain current memory policy. 00:10:53.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:53.162 EAL: Restoring previous memory policy: 4 00:10:53.162 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.162 EAL: request: mp_malloc_sync 00:10:53.162 EAL: No shared files mode enabled, IPC is disabled 00:10:53.162 EAL: Heap on socket 0 was expanded by 10MB 00:10:53.162 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.162 EAL: request: mp_malloc_sync 00:10:53.162 EAL: No shared files mode enabled, IPC is disabled 00:10:53.162 EAL: Heap on socket 0 was shrunk by 10MB 00:10:53.162 EAL: Trying to obtain current memory policy. 00:10:53.162 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:53.162 EAL: Restoring previous memory policy: 4 00:10:53.162 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.162 EAL: request: mp_malloc_sync 00:10:53.162 EAL: No shared files mode enabled, IPC is disabled 00:10:53.162 EAL: Heap on socket 0 was expanded by 18MB 00:10:53.162 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.162 EAL: request: mp_malloc_sync 00:10:53.162 EAL: No shared files mode enabled, IPC is disabled 00:10:53.162 EAL: Heap on socket 0 was shrunk by 18MB 00:10:53.419 EAL: Trying to obtain current memory policy. 00:10:53.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:53.419 EAL: Restoring previous memory policy: 4 00:10:53.419 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.419 EAL: request: mp_malloc_sync 00:10:53.419 EAL: No shared files mode enabled, IPC is disabled 00:10:53.419 EAL: Heap on socket 0 was expanded by 34MB 00:10:53.419 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.419 EAL: request: mp_malloc_sync 00:10:53.419 EAL: No shared files mode enabled, IPC is disabled 00:10:53.419 EAL: Heap on socket 0 was shrunk by 34MB 00:10:53.419 EAL: Trying to obtain current memory policy. 
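Every allocation round above is bracketed by "Setting policy MPOL_PREFERRED for socket 0" and a later restore: EAL temporarily prefers the target NUMA node while growing the heap. On a single-node VM like this one the policy is effectively a no-op, but the same preference can be reproduced outside EAL; availability of numactl is assumed, and the binary path is taken from the trace above:

    # Run the workload with memory preferentially allocated from node 0,
    # matching EAL's temporary MPOL_PREFERRED policy.
    numactl --preferred=0 /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
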
00:10:53.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:53.419 EAL: Restoring previous memory policy: 4 00:10:53.419 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.419 EAL: request: mp_malloc_sync 00:10:53.419 EAL: No shared files mode enabled, IPC is disabled 00:10:53.419 EAL: Heap on socket 0 was expanded by 66MB 00:10:53.419 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.419 EAL: request: mp_malloc_sync 00:10:53.419 EAL: No shared files mode enabled, IPC is disabled 00:10:53.419 EAL: Heap on socket 0 was shrunk by 66MB 00:10:53.677 EAL: Trying to obtain current memory policy. 00:10:53.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:53.677 EAL: Restoring previous memory policy: 4 00:10:53.677 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.677 EAL: request: mp_malloc_sync 00:10:53.677 EAL: No shared files mode enabled, IPC is disabled 00:10:53.677 EAL: Heap on socket 0 was expanded by 130MB 00:10:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:10:53.935 EAL: request: mp_malloc_sync 00:10:53.935 EAL: No shared files mode enabled, IPC is disabled 00:10:53.935 EAL: Heap on socket 0 was shrunk by 130MB 00:10:53.935 EAL: Trying to obtain current memory policy. 00:10:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:54.194 EAL: Restoring previous memory policy: 4 00:10:54.194 EAL: Calling mem event callback 'spdk:(nil)' 00:10:54.194 EAL: request: mp_malloc_sync 00:10:54.194 EAL: No shared files mode enabled, IPC is disabled 00:10:54.194 EAL: Heap on socket 0 was expanded by 258MB 00:10:54.451 EAL: Calling mem event callback 'spdk:(nil)' 00:10:54.451 EAL: request: mp_malloc_sync 00:10:54.451 EAL: No shared files mode enabled, IPC is disabled 00:10:54.451 EAL: Heap on socket 0 was shrunk by 258MB 00:10:55.017 EAL: Trying to obtain current memory policy. 00:10:55.017 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:55.017 EAL: Restoring previous memory policy: 4 00:10:55.017 EAL: Calling mem event callback 'spdk:(nil)' 00:10:55.017 EAL: request: mp_malloc_sync 00:10:55.017 EAL: No shared files mode enabled, IPC is disabled 00:10:55.017 EAL: Heap on socket 0 was expanded by 514MB 00:10:55.952 EAL: Calling mem event callback 'spdk:(nil)' 00:10:55.952 EAL: request: mp_malloc_sync 00:10:55.952 EAL: No shared files mode enabled, IPC is disabled 00:10:55.952 EAL: Heap on socket 0 was shrunk by 514MB 00:10:56.518 EAL: Trying to obtain current memory policy. 
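The expansion sizes traced so far (4, 6, 10, 18, 34, 66, 130 MB, continuing to 258, 514 and 1026 MB below) follow the recurrence next = 2*prev - 2, so each round roughly doubles the heap while exercising the expand/shrink callback pair. A sketch that reproduces the ladder; the pattern is read off this log, not from the test source:

    # Reproduce the vtophys_malloc_test allocation ladder observed above.
    size=4
    while (( size <= 1026 )); do
        echo "expand heap by ${size}MB"
        size=$(( 2 * size - 2 ))
    done
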
00:10:56.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:56.776 EAL: Restoring previous memory policy: 4 00:10:56.776 EAL: Calling mem event callback 'spdk:(nil)' 00:10:56.776 EAL: request: mp_malloc_sync 00:10:56.776 EAL: No shared files mode enabled, IPC is disabled 00:10:56.776 EAL: Heap on socket 0 was expanded by 1026MB 00:10:58.152 EAL: Calling mem event callback 'spdk:(nil)' 00:10:58.410 EAL: request: mp_malloc_sync 00:10:58.410 EAL: No shared files mode enabled, IPC is disabled 00:10:58.410 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:59.782 passed 00:10:59.782 00:10:59.782 Run Summary: Type Total Ran Passed Failed Inactive 00:10:59.782 suites 1 1 n/a 0 0 00:10:59.782 tests 2 2 2 0 0 00:10:59.782 asserts 5649 5649 5649 0 n/a 00:10:59.782 00:10:59.782 Elapsed time = 6.900 seconds 00:10:59.782 EAL: Calling mem event callback 'spdk:(nil)' 00:10:59.782 EAL: request: mp_malloc_sync 00:10:59.782 EAL: No shared files mode enabled, IPC is disabled 00:10:59.782 EAL: Heap on socket 0 was shrunk by 2MB 00:10:59.782 EAL: No shared files mode enabled, IPC is disabled 00:10:59.782 EAL: No shared files mode enabled, IPC is disabled 00:10:59.782 EAL: No shared files mode enabled, IPC is disabled 00:10:59.782 00:10:59.782 real 0m7.236s 00:10:59.782 user 0m6.379s 00:10:59.782 sys 0m0.675s 00:10:59.782 13:05:06 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.782 ************************************ 00:10:59.782 13:05:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:59.782 END TEST env_vtophys 00:10:59.782 ************************************ 00:10:59.782 13:05:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:59.783 13:05:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.783 13:05:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.783 13:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:10:59.783 ************************************ 00:10:59.783 START TEST env_pci 00:10:59.783 ************************************ 00:10:59.783 13:05:06 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:00.041 00:11:00.041 00:11:00.041 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.041 http://cunit.sourceforge.net/ 00:11:00.041 00:11:00.041 00:11:00.041 Suite: pci 00:11:00.041 Test: pci_hook ...[2024-12-06 13:05:06.315899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58261 has claimed it 00:11:00.041 passed 00:11:00.041 00:11:00.041 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.041 suites 1 1 n/a 0 0 00:11:00.041 tests 1 1 1 0 0 00:11:00.041 asserts 25 25 25 0 n/a 00:11:00.041 00:11:00.041 Elapsed time = 0.007 seconds 00:11:00.041 EAL: Cannot find device (10000:00:01.0) 00:11:00.041 EAL: Failed to attach device on primary process 00:11:00.041 00:11:00.041 real 0m0.068s 00:11:00.041 user 0m0.034s 00:11:00.041 sys 0m0.033s 00:11:00.041 13:05:06 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.041 ************************************ 00:11:00.041 END TEST env_pci 00:11:00.041 ************************************ 00:11:00.041 13:05:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:00.041 13:05:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:00.041 13:05:06 env -- env/env.sh@15 -- # uname 00:11:00.041 13:05:06 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:00.041 13:05:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:00.041 13:05:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:00.041 13:05:06 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:00.041 13:05:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.041 13:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:11:00.041 ************************************ 00:11:00.041 START TEST env_dpdk_post_init 00:11:00.041 ************************************ 00:11:00.041 13:05:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:00.041 EAL: Detected CPU lcores: 10 00:11:00.041 EAL: Detected NUMA nodes: 1 00:11:00.041 EAL: Detected shared linkage of DPDK 00:11:00.041 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:00.041 EAL: Selected IOVA mode 'PA' 00:11:00.299 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:00.299 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:00.299 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:00.299 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:11:00.299 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:11:00.299 Starting DPDK initialization... 00:11:00.299 Starting SPDK post initialization... 00:11:00.299 SPDK NVMe probe 00:11:00.299 Attaching to 0000:00:10.0 00:11:00.299 Attaching to 0000:00:11.0 00:11:00.299 Attaching to 0000:00:12.0 00:11:00.299 Attaching to 0000:00:13.0 00:11:00.299 Attached to 0000:00:10.0 00:11:00.299 Attached to 0000:00:11.0 00:11:00.299 Attached to 0000:00:13.0 00:11:00.299 Attached to 0000:00:12.0 00:11:00.299 Cleaning up... 
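The probe above attaches four emulated NVMe controllers (PCI ID 1b36:0010, QEMU's NVMe device) at 00:10.0 through 00:13.0; 13.0 completes before 12.0 because attach is asynchronous. The same controllers can be listed from the shell, assuming lspci is available in the VM:

    # List the QEMU NVMe controllers (vendor 1b36, device 0010) that
    # env_dpdk_post_init probes with the spdk_nvme driver.
    lspci -d 1b36:0010
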
00:11:00.299 00:11:00.299 real 0m0.323s 00:11:00.299 user 0m0.118s 00:11:00.299 sys 0m0.106s 00:11:00.299 13:05:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.299 13:05:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:00.299 ************************************ 00:11:00.299 END TEST env_dpdk_post_init 00:11:00.299 ************************************ 00:11:00.299 13:05:06 env -- env/env.sh@26 -- # uname 00:11:00.299 13:05:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:00.299 13:05:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:00.299 13:05:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.299 13:05:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.299 13:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:11:00.299 ************************************ 00:11:00.299 START TEST env_mem_callbacks 00:11:00.299 ************************************ 00:11:00.299 13:05:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:00.557 EAL: Detected CPU lcores: 10 00:11:00.557 EAL: Detected NUMA nodes: 1 00:11:00.557 EAL: Detected shared linkage of DPDK 00:11:00.557 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:00.557 EAL: Selected IOVA mode 'PA' 00:11:00.557 00:11:00.557 00:11:00.557 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.557 http://cunit.sourceforge.net/ 00:11:00.557 00:11:00.557 00:11:00.557 Suite: memory 00:11:00.557 Test: test ... 00:11:00.557 register 0x200000200000 2097152 00:11:00.557 malloc 3145728 00:11:00.557 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:00.557 register 0x200000400000 4194304 00:11:00.557 buf 0x2000004fffc0 len 3145728 PASSED 00:11:00.557 malloc 64 00:11:00.557 buf 0x2000004ffec0 len 64 PASSED 00:11:00.557 malloc 4194304 00:11:00.557 register 0x200000800000 6291456 00:11:00.557 buf 0x2000009fffc0 len 4194304 PASSED 00:11:00.557 free 0x2000004fffc0 3145728 00:11:00.557 free 0x2000004ffec0 64 00:11:00.557 unregister 0x200000400000 4194304 PASSED 00:11:00.557 free 0x2000009fffc0 4194304 00:11:00.557 unregister 0x200000800000 6291456 PASSED 00:11:00.557 malloc 8388608 00:11:00.557 register 0x200000400000 10485760 00:11:00.557 buf 0x2000005fffc0 len 8388608 PASSED 00:11:00.557 free 0x2000005fffc0 8388608 00:11:00.557 unregister 0x200000400000 10485760 PASSED 00:11:00.557 passed 00:11:00.557 00:11:00.557 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.557 suites 1 1 n/a 0 0 00:11:00.557 tests 1 1 1 0 0 00:11:00.557 asserts 15 15 15 0 n/a 00:11:00.557 00:11:00.557 Elapsed time = 0.064 seconds 00:11:00.557 00:11:00.557 real 0m0.281s 00:11:00.557 user 0m0.110s 00:11:00.557 sys 0m0.068s 00:11:00.557 ************************************ 00:11:00.557 END TEST env_mem_callbacks 00:11:00.557 ************************************ 00:11:00.557 13:05:07 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.557 13:05:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:00.815 ************************************ 00:11:00.815 END TEST env 00:11:00.815 ************************************ 00:11:00.815 00:11:00.815 real 0m8.739s 00:11:00.815 user 0m7.211s 00:11:00.815 sys 0m1.121s 00:11:00.815 13:05:07 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.815 13:05:07 env -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.815 13:05:07 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:00.815 13:05:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.815 13:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.815 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:11:00.815 ************************************ 00:11:00.815 START TEST rpc 00:11:00.815 ************************************ 00:11:00.815 13:05:07 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:00.815 * Looking for test storage... 00:11:00.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:00.815 13:05:07 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.815 13:05:07 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.815 13:05:07 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.815 13:05:07 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.815 13:05:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.815 13:05:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.815 13:05:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.815 13:05:07 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.815 13:05:07 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.815 13:05:07 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.815 13:05:07 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.815 13:05:07 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.815 13:05:07 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.815 13:05:07 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.815 13:05:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.815 13:05:07 rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:00.815 13:05:07 rpc -- scripts/common.sh@345 -- # : 1 00:11:00.815 13:05:07 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.815 13:05:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.815 13:05:07 rpc -- scripts/common.sh@365 -- # decimal 1 00:11:00.815 13:05:07 rpc -- scripts/common.sh@353 -- # local d=1 00:11:00.815 13:05:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.815 13:05:07 rpc -- scripts/common.sh@355 -- # echo 1 00:11:00.815 13:05:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.073 13:05:07 rpc -- scripts/common.sh@366 -- # decimal 2 00:11:01.073 13:05:07 rpc -- scripts/common.sh@353 -- # local d=2 00:11:01.073 13:05:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.073 13:05:07 rpc -- scripts/common.sh@355 -- # echo 2 00:11:01.073 13:05:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.073 13:05:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.073 13:05:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.073 13:05:07 rpc -- scripts/common.sh@368 -- # return 0 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.073 --rc genhtml_branch_coverage=1 00:11:01.073 --rc genhtml_function_coverage=1 00:11:01.073 --rc genhtml_legend=1 00:11:01.073 --rc geninfo_all_blocks=1 00:11:01.073 --rc geninfo_unexecuted_blocks=1 00:11:01.073 00:11:01.073 ' 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.073 --rc genhtml_branch_coverage=1 00:11:01.073 --rc genhtml_function_coverage=1 00:11:01.073 --rc genhtml_legend=1 00:11:01.073 --rc geninfo_all_blocks=1 00:11:01.073 --rc geninfo_unexecuted_blocks=1 00:11:01.073 00:11:01.073 ' 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.073 --rc genhtml_branch_coverage=1 00:11:01.073 --rc genhtml_function_coverage=1 00:11:01.073 --rc genhtml_legend=1 00:11:01.073 --rc geninfo_all_blocks=1 00:11:01.073 --rc geninfo_unexecuted_blocks=1 00:11:01.073 00:11:01.073 ' 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.073 --rc genhtml_branch_coverage=1 00:11:01.073 --rc genhtml_function_coverage=1 00:11:01.073 --rc genhtml_legend=1 00:11:01.073 --rc geninfo_all_blocks=1 00:11:01.073 --rc geninfo_unexecuted_blocks=1 00:11:01.073 00:11:01.073 ' 00:11:01.073 13:05:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58389 00:11:01.073 13:05:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:01.073 13:05:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:01.073 13:05:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58389 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@835 -- # '[' -z 58389 ']' 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
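The cmp_versions trace that closes above (a repeat of the one in the env suite, since each suite sources scripts/common.sh) gates the extra lcov coverage flags on "lt 1.15 2": both version strings are split on ".", "-" and ":" into arrays and compared field by field. A condensed sketch of that core, assuming the lt/equal cases only; the real cmp_versions also handles ">", padding and equality reporting:

    # Field-wise "less than" for dotted versions, as in scripts/common.sh.
    lt() {
        local -a a b; local i n
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal counts as not less-than
    }
    lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"
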
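waitforlisten, entered above right after spdk_tgt is launched as pid 58389, blocks until the target is alive and accepting RPCs on /var/tmp/spdk.sock. A reduced sketch of that wait loop; the real helper in autotest_common.sh also issues probe RPCs and handles configurable timeouts:

    # Minimal waitforlisten: poll until the target pid is alive and its
    # UNIX domain RPC socket exists, or give up after roughly 10 seconds.
    wait_for_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died
            [[ -S $sock ]] && return 0               # socket is up
            sleep 0.1
        done
        return 1
    }
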
00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.073 13:05:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.073 [2024-12-06 13:05:07.473248] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:01.073 [2024-12-06 13:05:07.474105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58389 ] 00:11:01.338 [2024-12-06 13:05:07.656532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.338 [2024-12-06 13:05:07.829092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:01.338 [2024-12-06 13:05:07.829370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58389' to capture a snapshot of events at runtime. 00:11:01.338 [2024-12-06 13:05:07.829516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.338 [2024-12-06 13:05:07.829823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.338 [2024-12-06 13:05:07.830014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58389 for offline analysis/debug. 00:11:01.338 [2024-12-06 13:05:07.831291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.302 13:05:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.302 13:05:08 rpc -- common/autotest_common.sh@868 -- # return 0 00:11:02.302 13:05:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:02.302 13:05:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:02.302 13:05:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:02.302 13:05:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:02.302 13:05:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.302 13:05:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.302 13:05:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.302 ************************************ 00:11:02.302 START TEST rpc_integrity 00:11:02.302 ************************************ 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.302 13:05:08 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:02.302 { 00:11:02.302 "name": "Malloc0", 00:11:02.302 "aliases": [ 00:11:02.302 "cfedee45-7bcf-49b4-9c3a-74ea1e11de87" 00:11:02.302 ], 00:11:02.302 "product_name": "Malloc disk", 00:11:02.302 "block_size": 512, 00:11:02.302 "num_blocks": 16384, 00:11:02.302 "uuid": "cfedee45-7bcf-49b4-9c3a-74ea1e11de87", 00:11:02.302 "assigned_rate_limits": { 00:11:02.302 "rw_ios_per_sec": 0, 00:11:02.302 "rw_mbytes_per_sec": 0, 00:11:02.302 "r_mbytes_per_sec": 0, 00:11:02.302 "w_mbytes_per_sec": 0 00:11:02.302 }, 00:11:02.302 "claimed": false, 00:11:02.302 "zoned": false, 00:11:02.302 "supported_io_types": { 00:11:02.302 "read": true, 00:11:02.302 "write": true, 00:11:02.302 "unmap": true, 00:11:02.302 "flush": true, 00:11:02.302 "reset": true, 00:11:02.302 "nvme_admin": false, 00:11:02.302 "nvme_io": false, 00:11:02.302 "nvme_io_md": false, 00:11:02.302 "write_zeroes": true, 00:11:02.302 "zcopy": true, 00:11:02.302 "get_zone_info": false, 00:11:02.302 "zone_management": false, 00:11:02.302 "zone_append": false, 00:11:02.302 "compare": false, 00:11:02.302 "compare_and_write": false, 00:11:02.302 "abort": true, 00:11:02.302 "seek_hole": false, 00:11:02.302 "seek_data": false, 00:11:02.302 "copy": true, 00:11:02.302 "nvme_iov_md": false 00:11:02.302 }, 00:11:02.302 "memory_domains": [ 00:11:02.302 { 00:11:02.302 "dma_device_id": "system", 00:11:02.302 "dma_device_type": 1 00:11:02.302 }, 00:11:02.302 { 00:11:02.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.302 "dma_device_type": 2 00:11:02.302 } 00:11:02.302 ], 00:11:02.302 "driver_specific": {} 00:11:02.302 } 00:11:02.302 ]' 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:02.302 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.302 [2024-12-06 13:05:08.786834] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:02.302 [2024-12-06 13:05:08.786940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.302 [2024-12-06 13:05:08.786999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:02.302 [2024-12-06 13:05:08.787025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.302 [2024-12-06 13:05:08.790263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.302 [2024-12-06 13:05:08.790323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:02.302 Passthru0 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.302 
13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.302 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.303 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.303 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:02.303 { 00:11:02.303 "name": "Malloc0", 00:11:02.303 "aliases": [ 00:11:02.303 "cfedee45-7bcf-49b4-9c3a-74ea1e11de87" 00:11:02.303 ], 00:11:02.303 "product_name": "Malloc disk", 00:11:02.303 "block_size": 512, 00:11:02.303 "num_blocks": 16384, 00:11:02.303 "uuid": "cfedee45-7bcf-49b4-9c3a-74ea1e11de87", 00:11:02.303 "assigned_rate_limits": { 00:11:02.303 "rw_ios_per_sec": 0, 00:11:02.303 "rw_mbytes_per_sec": 0, 00:11:02.303 "r_mbytes_per_sec": 0, 00:11:02.303 "w_mbytes_per_sec": 0 00:11:02.303 }, 00:11:02.303 "claimed": true, 00:11:02.303 "claim_type": "exclusive_write", 00:11:02.303 "zoned": false, 00:11:02.303 "supported_io_types": { 00:11:02.303 "read": true, 00:11:02.303 "write": true, 00:11:02.303 "unmap": true, 00:11:02.303 "flush": true, 00:11:02.303 "reset": true, 00:11:02.303 "nvme_admin": false, 00:11:02.303 "nvme_io": false, 00:11:02.303 "nvme_io_md": false, 00:11:02.303 "write_zeroes": true, 00:11:02.303 "zcopy": true, 00:11:02.303 "get_zone_info": false, 00:11:02.303 "zone_management": false, 00:11:02.303 "zone_append": false, 00:11:02.303 "compare": false, 00:11:02.303 "compare_and_write": false, 00:11:02.303 "abort": true, 00:11:02.303 "seek_hole": false, 00:11:02.303 "seek_data": false, 00:11:02.303 "copy": true, 00:11:02.303 "nvme_iov_md": false 00:11:02.303 }, 00:11:02.303 "memory_domains": [ 00:11:02.303 { 00:11:02.303 "dma_device_id": "system", 00:11:02.303 "dma_device_type": 1 00:11:02.303 }, 00:11:02.303 { 00:11:02.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.303 "dma_device_type": 2 00:11:02.303 } 00:11:02.303 ], 00:11:02.303 "driver_specific": {} 00:11:02.303 }, 00:11:02.303 { 00:11:02.303 "name": "Passthru0", 00:11:02.303 "aliases": [ 00:11:02.303 "85147721-0089-53d4-94df-833f9b0e6344" 00:11:02.303 ], 00:11:02.303 "product_name": "passthru", 00:11:02.303 "block_size": 512, 00:11:02.303 "num_blocks": 16384, 00:11:02.303 "uuid": "85147721-0089-53d4-94df-833f9b0e6344", 00:11:02.303 "assigned_rate_limits": { 00:11:02.303 "rw_ios_per_sec": 0, 00:11:02.303 "rw_mbytes_per_sec": 0, 00:11:02.303 "r_mbytes_per_sec": 0, 00:11:02.303 "w_mbytes_per_sec": 0 00:11:02.303 }, 00:11:02.303 "claimed": false, 00:11:02.303 "zoned": false, 00:11:02.303 "supported_io_types": { 00:11:02.303 "read": true, 00:11:02.303 "write": true, 00:11:02.303 "unmap": true, 00:11:02.303 "flush": true, 00:11:02.303 "reset": true, 00:11:02.303 "nvme_admin": false, 00:11:02.303 "nvme_io": false, 00:11:02.303 "nvme_io_md": false, 00:11:02.303 "write_zeroes": true, 00:11:02.303 "zcopy": true, 00:11:02.303 "get_zone_info": false, 00:11:02.303 "zone_management": false, 00:11:02.303 "zone_append": false, 00:11:02.303 "compare": false, 00:11:02.303 "compare_and_write": false, 00:11:02.303 "abort": true, 00:11:02.303 "seek_hole": false, 00:11:02.303 "seek_data": false, 00:11:02.303 "copy": true, 00:11:02.303 "nvme_iov_md": false 00:11:02.303 }, 00:11:02.303 "memory_domains": [ 00:11:02.303 { 00:11:02.303 "dma_device_id": "system", 00:11:02.303 "dma_device_type": 1 00:11:02.303 }, 00:11:02.303 { 00:11:02.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.303 "dma_device_type": 2 
00:11:02.303 } 00:11:02.303 ], 00:11:02.303 "driver_specific": { 00:11:02.303 "passthru": { 00:11:02.303 "name": "Passthru0", 00:11:02.303 "base_bdev_name": "Malloc0" 00:11:02.303 } 00:11:02.303 } 00:11:02.303 } 00:11:02.303 ]' 00:11:02.303 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:02.560 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:02.560 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.560 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.560 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.560 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:02.560 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:02.560 ************************************ 00:11:02.560 END TEST rpc_integrity 00:11:02.560 ************************************ 00:11:02.560 13:05:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:02.560 00:11:02.560 real 0m0.341s 00:11:02.560 user 0m0.214s 00:11:02.560 sys 0m0.036s 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.560 13:05:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 13:05:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:02.560 13:05:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.560 13:05:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.560 13:05:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 ************************************ 00:11:02.560 START TEST rpc_plugins 00:11:02.560 ************************************ 00:11:02.560 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:11:02.560 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:02.560 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.560 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.560 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:02.560 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:02.560 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.560 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.560 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:02.560 { 00:11:02.560 "name": "Malloc1", 00:11:02.560 "aliases": 
[ 00:11:02.560 "9319a0f8-325a-4e1f-accd-557dabcfe6cc" 00:11:02.560 ], 00:11:02.560 "product_name": "Malloc disk", 00:11:02.560 "block_size": 4096, 00:11:02.560 "num_blocks": 256, 00:11:02.560 "uuid": "9319a0f8-325a-4e1f-accd-557dabcfe6cc", 00:11:02.560 "assigned_rate_limits": { 00:11:02.560 "rw_ios_per_sec": 0, 00:11:02.560 "rw_mbytes_per_sec": 0, 00:11:02.560 "r_mbytes_per_sec": 0, 00:11:02.560 "w_mbytes_per_sec": 0 00:11:02.560 }, 00:11:02.560 "claimed": false, 00:11:02.560 "zoned": false, 00:11:02.560 "supported_io_types": { 00:11:02.560 "read": true, 00:11:02.560 "write": true, 00:11:02.560 "unmap": true, 00:11:02.560 "flush": true, 00:11:02.560 "reset": true, 00:11:02.560 "nvme_admin": false, 00:11:02.560 "nvme_io": false, 00:11:02.560 "nvme_io_md": false, 00:11:02.560 "write_zeroes": true, 00:11:02.560 "zcopy": true, 00:11:02.560 "get_zone_info": false, 00:11:02.560 "zone_management": false, 00:11:02.560 "zone_append": false, 00:11:02.561 "compare": false, 00:11:02.561 "compare_and_write": false, 00:11:02.561 "abort": true, 00:11:02.561 "seek_hole": false, 00:11:02.561 "seek_data": false, 00:11:02.561 "copy": true, 00:11:02.561 "nvme_iov_md": false 00:11:02.561 }, 00:11:02.561 "memory_domains": [ 00:11:02.561 { 00:11:02.561 "dma_device_id": "system", 00:11:02.561 "dma_device_type": 1 00:11:02.561 }, 00:11:02.561 { 00:11:02.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.561 "dma_device_type": 2 00:11:02.561 } 00:11:02.561 ], 00:11:02.561 "driver_specific": {} 00:11:02.561 } 00:11:02.561 ]' 00:11:02.561 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:02.818 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:02.818 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:02.818 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.818 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.818 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.818 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:02.818 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.818 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.818 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.818 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:02.818 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:02.818 ************************************ 00:11:02.818 END TEST rpc_plugins 00:11:02.818 ************************************ 00:11:02.818 13:05:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:02.818 00:11:02.818 real 0m0.168s 00:11:02.818 user 0m0.110s 00:11:02.818 sys 0m0.020s 00:11:02.818 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.818 13:05:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:02.818 13:05:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:02.818 13:05:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.818 13:05:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.818 13:05:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.818 ************************************ 00:11:02.818 START TEST rpc_trace_cmd_test 00:11:02.818 ************************************ 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:02.818 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58389", 00:11:02.818 "tpoint_group_mask": "0x8", 00:11:02.818 "iscsi_conn": { 00:11:02.818 "mask": "0x2", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "scsi": { 00:11:02.818 "mask": "0x4", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "bdev": { 00:11:02.818 "mask": "0x8", 00:11:02.818 "tpoint_mask": "0xffffffffffffffff" 00:11:02.818 }, 00:11:02.818 "nvmf_rdma": { 00:11:02.818 "mask": "0x10", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "nvmf_tcp": { 00:11:02.818 "mask": "0x20", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "ftl": { 00:11:02.818 "mask": "0x40", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "blobfs": { 00:11:02.818 "mask": "0x80", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "dsa": { 00:11:02.818 "mask": "0x200", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "thread": { 00:11:02.818 "mask": "0x400", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "nvme_pcie": { 00:11:02.818 "mask": "0x800", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "iaa": { 00:11:02.818 "mask": "0x1000", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "nvme_tcp": { 00:11:02.818 "mask": "0x2000", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "bdev_nvme": { 00:11:02.818 "mask": "0x4000", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "sock": { 00:11:02.818 "mask": "0x8000", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "blob": { 00:11:02.818 "mask": "0x10000", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "bdev_raid": { 00:11:02.818 "mask": "0x20000", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 }, 00:11:02.818 "scheduler": { 00:11:02.818 "mask": "0x40000", 00:11:02.818 "tpoint_mask": "0x0" 00:11:02.818 } 00:11:02.818 }' 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:11:02.818 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:03.076 ************************************ 00:11:03.076 END TEST rpc_trace_cmd_test 00:11:03.076 ************************************ 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:03.076 00:11:03.076 real 0m0.242s 
00:11:03.076 user 0m0.216s 00:11:03.076 sys 0m0.019s 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.076 13:05:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.076 13:05:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:03.076 13:05:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:03.076 13:05:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:03.076 13:05:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:03.076 13:05:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.076 13:05:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.076 ************************************ 00:11:03.076 START TEST rpc_daemon_integrity 00:11:03.076 ************************************ 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.076 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:03.334 { 00:11:03.334 "name": "Malloc2", 00:11:03.334 "aliases": [ 00:11:03.334 "a7279a4b-bdd9-4351-b013-8fb4c7864795" 00:11:03.334 ], 00:11:03.334 "product_name": "Malloc disk", 00:11:03.334 "block_size": 512, 00:11:03.334 "num_blocks": 16384, 00:11:03.334 "uuid": "a7279a4b-bdd9-4351-b013-8fb4c7864795", 00:11:03.334 "assigned_rate_limits": { 00:11:03.334 "rw_ios_per_sec": 0, 00:11:03.334 "rw_mbytes_per_sec": 0, 00:11:03.334 "r_mbytes_per_sec": 0, 00:11:03.334 "w_mbytes_per_sec": 0 00:11:03.334 }, 00:11:03.334 "claimed": false, 00:11:03.334 "zoned": false, 00:11:03.334 "supported_io_types": { 00:11:03.334 "read": true, 00:11:03.334 "write": true, 00:11:03.334 "unmap": true, 00:11:03.334 "flush": true, 00:11:03.334 "reset": true, 00:11:03.334 "nvme_admin": false, 00:11:03.334 "nvme_io": false, 00:11:03.334 "nvme_io_md": false, 00:11:03.334 "write_zeroes": true, 00:11:03.334 "zcopy": true, 00:11:03.334 "get_zone_info": false, 00:11:03.334 "zone_management": false, 00:11:03.334 "zone_append": false, 00:11:03.334 "compare": false, 00:11:03.334 
"compare_and_write": false, 00:11:03.334 "abort": true, 00:11:03.334 "seek_hole": false, 00:11:03.334 "seek_data": false, 00:11:03.334 "copy": true, 00:11:03.334 "nvme_iov_md": false 00:11:03.334 }, 00:11:03.334 "memory_domains": [ 00:11:03.334 { 00:11:03.334 "dma_device_id": "system", 00:11:03.334 "dma_device_type": 1 00:11:03.334 }, 00:11:03.334 { 00:11:03.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.334 "dma_device_type": 2 00:11:03.334 } 00:11:03.334 ], 00:11:03.334 "driver_specific": {} 00:11:03.334 } 00:11:03.334 ]' 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.334 [2024-12-06 13:05:09.670189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:03.334 [2024-12-06 13:05:09.670412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.334 [2024-12-06 13:05:09.670457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:03.334 [2024-12-06 13:05:09.670476] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.334 [2024-12-06 13:05:09.673263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.334 [2024-12-06 13:05:09.673318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:03.334 Passthru0 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.334 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:03.334 { 00:11:03.334 "name": "Malloc2", 00:11:03.334 "aliases": [ 00:11:03.334 "a7279a4b-bdd9-4351-b013-8fb4c7864795" 00:11:03.334 ], 00:11:03.334 "product_name": "Malloc disk", 00:11:03.334 "block_size": 512, 00:11:03.334 "num_blocks": 16384, 00:11:03.334 "uuid": "a7279a4b-bdd9-4351-b013-8fb4c7864795", 00:11:03.334 "assigned_rate_limits": { 00:11:03.334 "rw_ios_per_sec": 0, 00:11:03.334 "rw_mbytes_per_sec": 0, 00:11:03.334 "r_mbytes_per_sec": 0, 00:11:03.334 "w_mbytes_per_sec": 0 00:11:03.335 }, 00:11:03.335 "claimed": true, 00:11:03.335 "claim_type": "exclusive_write", 00:11:03.335 "zoned": false, 00:11:03.335 "supported_io_types": { 00:11:03.335 "read": true, 00:11:03.335 "write": true, 00:11:03.335 "unmap": true, 00:11:03.335 "flush": true, 00:11:03.335 "reset": true, 00:11:03.335 "nvme_admin": false, 00:11:03.335 "nvme_io": false, 00:11:03.335 "nvme_io_md": false, 00:11:03.335 "write_zeroes": true, 00:11:03.335 "zcopy": true, 00:11:03.335 "get_zone_info": false, 00:11:03.335 "zone_management": false, 00:11:03.335 "zone_append": false, 00:11:03.335 "compare": false, 00:11:03.335 "compare_and_write": false, 00:11:03.335 "abort": true, 00:11:03.335 "seek_hole": false, 00:11:03.335 "seek_data": false, 
00:11:03.335 "copy": true, 00:11:03.335 "nvme_iov_md": false 00:11:03.335 }, 00:11:03.335 "memory_domains": [ 00:11:03.335 { 00:11:03.335 "dma_device_id": "system", 00:11:03.335 "dma_device_type": 1 00:11:03.335 }, 00:11:03.335 { 00:11:03.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.335 "dma_device_type": 2 00:11:03.335 } 00:11:03.335 ], 00:11:03.335 "driver_specific": {} 00:11:03.335 }, 00:11:03.335 { 00:11:03.335 "name": "Passthru0", 00:11:03.335 "aliases": [ 00:11:03.335 "8e0ec464-72fa-5d6e-98a3-0413e47434c0" 00:11:03.335 ], 00:11:03.335 "product_name": "passthru", 00:11:03.335 "block_size": 512, 00:11:03.335 "num_blocks": 16384, 00:11:03.335 "uuid": "8e0ec464-72fa-5d6e-98a3-0413e47434c0", 00:11:03.335 "assigned_rate_limits": { 00:11:03.335 "rw_ios_per_sec": 0, 00:11:03.335 "rw_mbytes_per_sec": 0, 00:11:03.335 "r_mbytes_per_sec": 0, 00:11:03.335 "w_mbytes_per_sec": 0 00:11:03.335 }, 00:11:03.335 "claimed": false, 00:11:03.335 "zoned": false, 00:11:03.335 "supported_io_types": { 00:11:03.335 "read": true, 00:11:03.335 "write": true, 00:11:03.335 "unmap": true, 00:11:03.335 "flush": true, 00:11:03.335 "reset": true, 00:11:03.335 "nvme_admin": false, 00:11:03.335 "nvme_io": false, 00:11:03.335 "nvme_io_md": false, 00:11:03.335 "write_zeroes": true, 00:11:03.335 "zcopy": true, 00:11:03.335 "get_zone_info": false, 00:11:03.335 "zone_management": false, 00:11:03.335 "zone_append": false, 00:11:03.335 "compare": false, 00:11:03.335 "compare_and_write": false, 00:11:03.335 "abort": true, 00:11:03.335 "seek_hole": false, 00:11:03.335 "seek_data": false, 00:11:03.335 "copy": true, 00:11:03.335 "nvme_iov_md": false 00:11:03.335 }, 00:11:03.335 "memory_domains": [ 00:11:03.335 { 00:11:03.335 "dma_device_id": "system", 00:11:03.335 "dma_device_type": 1 00:11:03.335 }, 00:11:03.335 { 00:11:03.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.335 "dma_device_type": 2 00:11:03.335 } 00:11:03.335 ], 00:11:03.335 "driver_specific": { 00:11:03.335 "passthru": { 00:11:03.335 "name": "Passthru0", 00:11:03.335 "base_bdev_name": "Malloc2" 00:11:03.335 } 00:11:03.335 } 00:11:03.335 } 00:11:03.335 ]' 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:03.335 ************************************ 00:11:03.335 END TEST rpc_daemon_integrity 00:11:03.335 ************************************ 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:03.335 00:11:03.335 real 0m0.342s 00:11:03.335 user 0m0.219s 00:11:03.335 sys 0m0.032s 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.335 13:05:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:03.593 13:05:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:03.593 13:05:09 rpc -- rpc/rpc.sh@84 -- # killprocess 58389 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 58389 ']' 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@958 -- # kill -0 58389 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@959 -- # uname 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58389 00:11:03.593 killing process with pid 58389 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58389' 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@973 -- # kill 58389 00:11:03.593 13:05:09 rpc -- common/autotest_common.sh@978 -- # wait 58389 00:11:05.491 ************************************ 00:11:05.491 END TEST rpc 00:11:05.491 ************************************ 00:11:05.491 00:11:05.491 real 0m4.871s 00:11:05.491 user 0m5.718s 00:11:05.491 sys 0m0.728s 00:11:05.491 13:05:12 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.491 13:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.750 13:05:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:05.750 13:05:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.750 13:05:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.750 13:05:12 -- common/autotest_common.sh@10 -- # set +x 00:11:05.750 ************************************ 00:11:05.750 START TEST skip_rpc 00:11:05.751 ************************************ 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:05.751 * Looking for test storage... 
00:11:05.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.751 13:05:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:05.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.751 --rc genhtml_branch_coverage=1 00:11:05.751 --rc genhtml_function_coverage=1 00:11:05.751 --rc genhtml_legend=1 00:11:05.751 --rc geninfo_all_blocks=1 00:11:05.751 --rc geninfo_unexecuted_blocks=1 00:11:05.751 00:11:05.751 ' 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:05.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.751 --rc genhtml_branch_coverage=1 00:11:05.751 --rc genhtml_function_coverage=1 00:11:05.751 --rc genhtml_legend=1 00:11:05.751 --rc geninfo_all_blocks=1 00:11:05.751 --rc geninfo_unexecuted_blocks=1 00:11:05.751 00:11:05.751 ' 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:05.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.751 --rc genhtml_branch_coverage=1 00:11:05.751 --rc genhtml_function_coverage=1 00:11:05.751 --rc genhtml_legend=1 00:11:05.751 --rc geninfo_all_blocks=1 00:11:05.751 --rc geninfo_unexecuted_blocks=1 00:11:05.751 00:11:05.751 ' 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:05.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.751 --rc genhtml_branch_coverage=1 00:11:05.751 --rc genhtml_function_coverage=1 00:11:05.751 --rc genhtml_legend=1 00:11:05.751 --rc geninfo_all_blocks=1 00:11:05.751 --rc geninfo_unexecuted_blocks=1 00:11:05.751 00:11:05.751 ' 00:11:05.751 13:05:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:05.751 13:05:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:05.751 13:05:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.751 13:05:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.016 ************************************ 00:11:06.016 START TEST skip_rpc 00:11:06.016 ************************************ 00:11:06.016 13:05:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:11:06.016 13:05:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58613 00:11:06.016 13:05:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:06.016 13:05:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:06.016 13:05:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:06.016 [2024-12-06 13:05:12.444085] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:06.016 [2024-12-06 13:05:12.444266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58613 ] 00:11:06.274 [2024-12-06 13:05:12.621809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.274 [2024-12-06 13:05:12.728209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58613 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58613 ']' 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58613 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58613 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58613' 00:11:11.539 killing process with pid 58613 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58613 00:11:11.539 13:05:17 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58613 00:11:12.913 ************************************ 00:11:12.913 END TEST skip_rpc 00:11:12.913 ************************************ 00:11:12.913 00:11:12.913 real 0m7.138s 00:11:12.913 user 0m6.677s 00:11:12.913 sys 0m0.361s 00:11:12.913 13:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.913 13:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:11:13.171 13:05:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:13.171 13:05:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.171 13:05:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.171 13:05:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.171 ************************************ 00:11:13.171 START TEST skip_rpc_with_json 00:11:13.171 ************************************ 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:13.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58717 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58717 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58717 ']' 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.171 13:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:13.171 [2024-12-06 13:05:19.594299] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:13.171 [2024-12-06 13:05:19.594532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58717 ] 00:11:13.428 [2024-12-06 13:05:19.781291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.428 [2024-12-06 13:05:19.927327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.360 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.360 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:11:14.360 13:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:14.360 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.360 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:14.360 [2024-12-06 13:05:20.788089] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:14.360 request: 00:11:14.360 { 00:11:14.360 "trtype": "tcp", 00:11:14.360 "method": "nvmf_get_transports", 00:11:14.360 "req_id": 1 00:11:14.360 } 00:11:14.360 Got JSON-RPC error response 00:11:14.360 response: 00:11:14.360 { 00:11:14.360 "code": -19, 00:11:14.360 "message": "No such device" 00:11:14.360 } 00:11:14.360 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:14.361 13:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:14.361 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.361 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:14.361 [2024-12-06 13:05:20.800281] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.361 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.361 13:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:14.361 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.361 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:14.619 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.619 13:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:14.619 { 00:11:14.619 "subsystems": [ 00:11:14.619 { 00:11:14.619 "subsystem": "fsdev", 00:11:14.619 "config": [ 00:11:14.619 { 00:11:14.619 "method": "fsdev_set_opts", 00:11:14.619 "params": { 00:11:14.619 "fsdev_io_pool_size": 65535, 00:11:14.619 "fsdev_io_cache_size": 256 00:11:14.619 } 00:11:14.619 } 00:11:14.619 ] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "keyring", 00:11:14.619 "config": [] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "iobuf", 00:11:14.619 "config": [ 00:11:14.619 { 00:11:14.619 "method": "iobuf_set_options", 00:11:14.619 "params": { 00:11:14.619 "small_pool_count": 8192, 00:11:14.619 "large_pool_count": 1024, 00:11:14.619 "small_bufsize": 8192, 00:11:14.619 "large_bufsize": 135168, 00:11:14.619 "enable_numa": false 00:11:14.619 } 00:11:14.619 } 00:11:14.619 ] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "sock", 00:11:14.619 "config": [ 00:11:14.619 { 
00:11:14.619 "method": "sock_set_default_impl", 00:11:14.619 "params": { 00:11:14.619 "impl_name": "posix" 00:11:14.619 } 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "method": "sock_impl_set_options", 00:11:14.619 "params": { 00:11:14.619 "impl_name": "ssl", 00:11:14.619 "recv_buf_size": 4096, 00:11:14.619 "send_buf_size": 4096, 00:11:14.619 "enable_recv_pipe": true, 00:11:14.619 "enable_quickack": false, 00:11:14.619 "enable_placement_id": 0, 00:11:14.619 "enable_zerocopy_send_server": true, 00:11:14.619 "enable_zerocopy_send_client": false, 00:11:14.619 "zerocopy_threshold": 0, 00:11:14.619 "tls_version": 0, 00:11:14.619 "enable_ktls": false 00:11:14.619 } 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "method": "sock_impl_set_options", 00:11:14.619 "params": { 00:11:14.619 "impl_name": "posix", 00:11:14.619 "recv_buf_size": 2097152, 00:11:14.619 "send_buf_size": 2097152, 00:11:14.619 "enable_recv_pipe": true, 00:11:14.619 "enable_quickack": false, 00:11:14.619 "enable_placement_id": 0, 00:11:14.619 "enable_zerocopy_send_server": true, 00:11:14.619 "enable_zerocopy_send_client": false, 00:11:14.619 "zerocopy_threshold": 0, 00:11:14.619 "tls_version": 0, 00:11:14.619 "enable_ktls": false 00:11:14.619 } 00:11:14.619 } 00:11:14.619 ] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "vmd", 00:11:14.619 "config": [] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "accel", 00:11:14.619 "config": [ 00:11:14.619 { 00:11:14.619 "method": "accel_set_options", 00:11:14.619 "params": { 00:11:14.619 "small_cache_size": 128, 00:11:14.619 "large_cache_size": 16, 00:11:14.619 "task_count": 2048, 00:11:14.619 "sequence_count": 2048, 00:11:14.619 "buf_count": 2048 00:11:14.619 } 00:11:14.619 } 00:11:14.619 ] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "bdev", 00:11:14.619 "config": [ 00:11:14.619 { 00:11:14.619 "method": "bdev_set_options", 00:11:14.619 "params": { 00:11:14.619 "bdev_io_pool_size": 65535, 00:11:14.619 "bdev_io_cache_size": 256, 00:11:14.619 "bdev_auto_examine": true, 00:11:14.619 "iobuf_small_cache_size": 128, 00:11:14.619 "iobuf_large_cache_size": 16 00:11:14.619 } 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "method": "bdev_raid_set_options", 00:11:14.619 "params": { 00:11:14.619 "process_window_size_kb": 1024, 00:11:14.619 "process_max_bandwidth_mb_sec": 0 00:11:14.619 } 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "method": "bdev_iscsi_set_options", 00:11:14.619 "params": { 00:11:14.619 "timeout_sec": 30 00:11:14.619 } 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "method": "bdev_nvme_set_options", 00:11:14.619 "params": { 00:11:14.619 "action_on_timeout": "none", 00:11:14.619 "timeout_us": 0, 00:11:14.619 "timeout_admin_us": 0, 00:11:14.619 "keep_alive_timeout_ms": 10000, 00:11:14.619 "arbitration_burst": 0, 00:11:14.619 "low_priority_weight": 0, 00:11:14.619 "medium_priority_weight": 0, 00:11:14.619 "high_priority_weight": 0, 00:11:14.619 "nvme_adminq_poll_period_us": 10000, 00:11:14.619 "nvme_ioq_poll_period_us": 0, 00:11:14.619 "io_queue_requests": 0, 00:11:14.619 "delay_cmd_submit": true, 00:11:14.619 "transport_retry_count": 4, 00:11:14.619 "bdev_retry_count": 3, 00:11:14.619 "transport_ack_timeout": 0, 00:11:14.619 "ctrlr_loss_timeout_sec": 0, 00:11:14.619 "reconnect_delay_sec": 0, 00:11:14.619 "fast_io_fail_timeout_sec": 0, 00:11:14.619 "disable_auto_failback": false, 00:11:14.619 "generate_uuids": false, 00:11:14.619 "transport_tos": 0, 00:11:14.619 "nvme_error_stat": false, 00:11:14.619 "rdma_srq_size": 0, 00:11:14.619 "io_path_stat": false, 
00:11:14.619 "allow_accel_sequence": false, 00:11:14.619 "rdma_max_cq_size": 0, 00:11:14.619 "rdma_cm_event_timeout_ms": 0, 00:11:14.619 "dhchap_digests": [ 00:11:14.619 "sha256", 00:11:14.619 "sha384", 00:11:14.619 "sha512" 00:11:14.619 ], 00:11:14.619 "dhchap_dhgroups": [ 00:11:14.619 "null", 00:11:14.619 "ffdhe2048", 00:11:14.619 "ffdhe3072", 00:11:14.619 "ffdhe4096", 00:11:14.619 "ffdhe6144", 00:11:14.619 "ffdhe8192" 00:11:14.619 ] 00:11:14.619 } 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "method": "bdev_nvme_set_hotplug", 00:11:14.619 "params": { 00:11:14.619 "period_us": 100000, 00:11:14.619 "enable": false 00:11:14.619 } 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "method": "bdev_wait_for_examine" 00:11:14.619 } 00:11:14.619 ] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "scsi", 00:11:14.619 "config": null 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "scheduler", 00:11:14.619 "config": [ 00:11:14.619 { 00:11:14.619 "method": "framework_set_scheduler", 00:11:14.619 "params": { 00:11:14.619 "name": "static" 00:11:14.619 } 00:11:14.619 } 00:11:14.619 ] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "vhost_scsi", 00:11:14.619 "config": [] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "vhost_blk", 00:11:14.619 "config": [] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "ublk", 00:11:14.619 "config": [] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "nbd", 00:11:14.619 "config": [] 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "subsystem": "nvmf", 00:11:14.619 "config": [ 00:11:14.619 { 00:11:14.619 "method": "nvmf_set_config", 00:11:14.619 "params": { 00:11:14.619 "discovery_filter": "match_any", 00:11:14.619 "admin_cmd_passthru": { 00:11:14.619 "identify_ctrlr": false 00:11:14.619 }, 00:11:14.619 "dhchap_digests": [ 00:11:14.619 "sha256", 00:11:14.619 "sha384", 00:11:14.619 "sha512" 00:11:14.619 ], 00:11:14.619 "dhchap_dhgroups": [ 00:11:14.619 "null", 00:11:14.619 "ffdhe2048", 00:11:14.619 "ffdhe3072", 00:11:14.619 "ffdhe4096", 00:11:14.619 "ffdhe6144", 00:11:14.619 "ffdhe8192" 00:11:14.619 ] 00:11:14.619 } 00:11:14.619 }, 00:11:14.619 { 00:11:14.619 "method": "nvmf_set_max_subsystems", 00:11:14.619 "params": { 00:11:14.619 "max_subsystems": 1024 00:11:14.619 } 00:11:14.620 }, 00:11:14.620 { 00:11:14.620 "method": "nvmf_set_crdt", 00:11:14.620 "params": { 00:11:14.620 "crdt1": 0, 00:11:14.620 "crdt2": 0, 00:11:14.620 "crdt3": 0 00:11:14.620 } 00:11:14.620 }, 00:11:14.620 { 00:11:14.620 "method": "nvmf_create_transport", 00:11:14.620 "params": { 00:11:14.620 "trtype": "TCP", 00:11:14.620 "max_queue_depth": 128, 00:11:14.620 "max_io_qpairs_per_ctrlr": 127, 00:11:14.620 "in_capsule_data_size": 4096, 00:11:14.620 "max_io_size": 131072, 00:11:14.620 "io_unit_size": 131072, 00:11:14.620 "max_aq_depth": 128, 00:11:14.620 "num_shared_buffers": 511, 00:11:14.620 "buf_cache_size": 4294967295, 00:11:14.620 "dif_insert_or_strip": false, 00:11:14.620 "zcopy": false, 00:11:14.620 "c2h_success": true, 00:11:14.620 "sock_priority": 0, 00:11:14.620 "abort_timeout_sec": 1, 00:11:14.620 "ack_timeout": 0, 00:11:14.620 "data_wr_pool_size": 0 00:11:14.620 } 00:11:14.620 } 00:11:14.620 ] 00:11:14.620 }, 00:11:14.620 { 00:11:14.620 "subsystem": "iscsi", 00:11:14.620 "config": [ 00:11:14.620 { 00:11:14.620 "method": "iscsi_set_options", 00:11:14.620 "params": { 00:11:14.620 "node_base": "iqn.2016-06.io.spdk", 00:11:14.620 "max_sessions": 128, 00:11:14.620 "max_connections_per_session": 2, 00:11:14.620 "max_queue_depth": 64, 00:11:14.620 
"default_time2wait": 2, 00:11:14.620 "default_time2retain": 20, 00:11:14.620 "first_burst_length": 8192, 00:11:14.620 "immediate_data": true, 00:11:14.620 "allow_duplicated_isid": false, 00:11:14.620 "error_recovery_level": 0, 00:11:14.620 "nop_timeout": 60, 00:11:14.620 "nop_in_interval": 30, 00:11:14.620 "disable_chap": false, 00:11:14.620 "require_chap": false, 00:11:14.620 "mutual_chap": false, 00:11:14.620 "chap_group": 0, 00:11:14.620 "max_large_datain_per_connection": 64, 00:11:14.620 "max_r2t_per_connection": 4, 00:11:14.620 "pdu_pool_size": 36864, 00:11:14.620 "immediate_data_pool_size": 16384, 00:11:14.620 "data_out_pool_size": 2048 00:11:14.620 } 00:11:14.620 } 00:11:14.620 ] 00:11:14.620 } 00:11:14.620 ] 00:11:14.620 } 00:11:14.620 13:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:14.620 13:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58717 00:11:14.620 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58717 ']' 00:11:14.620 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58717 00:11:14.620 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:14.620 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.620 13:05:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58717 00:11:14.620 killing process with pid 58717 00:11:14.620 13:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.620 13:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.620 13:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58717' 00:11:14.620 13:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58717 00:11:14.620 13:05:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58717 00:11:17.146 13:05:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58773 00:11:17.146 13:05:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:17.146 13:05:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:22.408 13:05:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58773 00:11:22.408 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58773 ']' 00:11:22.408 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58773 00:11:22.409 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:22.409 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.409 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58773 00:11:22.409 killing process with pid 58773 00:11:22.409 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.409 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.409 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58773' 00:11:22.409 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58773 00:11:22.409 13:05:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58773 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:24.309 00:11:24.309 real 0m10.938s 00:11:24.309 user 0m10.611s 00:11:24.309 sys 0m0.777s 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 ************************************ 00:11:24.309 END TEST skip_rpc_with_json 00:11:24.309 ************************************ 00:11:24.309 13:05:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:24.309 13:05:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.309 13:05:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.309 13:05:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 ************************************ 00:11:24.309 START TEST skip_rpc_with_delay 00:11:24.309 ************************************ 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:24.309 [2024-12-06 13:05:30.573743] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
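The ERROR line above is the entire point of TEST skip_rpc_with_delay: --wait-for-rpc pauses startup until a framework_start_init RPC arrives, which is meaningless when --no-rpc-server is also given, so the target must refuse to boot. The contract in isolation (flags exactly as in the trace):

    # spdk_tgt must reject this flag combination with a non-zero exit.
    if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'expected spdk_tgt to reject --wait-for-rpc' >&2
        exit 1
    fi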
00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:24.309 00:11:24.309 real 0m0.179s 00:11:24.309 user 0m0.096s 00:11:24.309 sys 0m0.080s 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.309 13:05:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 ************************************ 00:11:24.309 END TEST skip_rpc_with_delay 00:11:24.309 ************************************ 00:11:24.309 13:05:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:24.309 13:05:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:24.309 13:05:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:24.309 13:05:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.309 13:05:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.309 13:05:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.309 ************************************ 00:11:24.309 START TEST exit_on_failed_rpc_init 00:11:24.309 ************************************ 00:11:24.309 13:05:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:11:24.309 13:05:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58901 00:11:24.310 13:05:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58901 00:11:24.310 13:05:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58901 ']' 00:11:24.310 13:05:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.310 13:05:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:24.310 13:05:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.310 13:05:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.310 13:05:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.310 13:05:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:24.310 [2024-12-06 13:05:30.813297] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:24.310 [2024-12-06 13:05:30.813708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58901 ] 00:11:24.567 [2024-12-06 13:05:31.009224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.823 [2024-12-06 13:05:31.151829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:25.754 13:05:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:25.754 [2024-12-06 13:05:32.114153] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:25.754 [2024-12-06 13:05:32.114522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:11:26.012 [2024-12-06 13:05:32.292327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.012 [2024-12-06 13:05:32.443759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.012 [2024-12-06 13:05:32.443948] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:11:26.012 [2024-12-06 13:05:32.443995] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:26.012 [2024-12-06 13:05:32.444042] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58901 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58901 ']' 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58901 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.269 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58901 00:11:26.525 killing process with pid 58901 00:11:26.525 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.525 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.525 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58901' 00:11:26.525 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58901 00:11:26.525 13:05:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58901 00:11:28.424 ************************************ 00:11:28.424 END TEST exit_on_failed_rpc_init 00:11:28.424 ************************************ 00:11:28.424 00:11:28.424 real 0m4.233s 00:11:28.424 user 0m4.804s 00:11:28.424 sys 0m0.551s 00:11:28.424 13:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.424 13:05:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:28.682 13:05:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:28.682 ************************************ 00:11:28.682 END TEST skip_rpc 00:11:28.682 ************************************ 00:11:28.682 00:11:28.682 real 0m22.895s 00:11:28.682 user 0m22.391s 00:11:28.682 sys 0m1.971s 00:11:28.682 13:05:34 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.682 13:05:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.682 13:05:34 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:28.682 13:05:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.682 13:05:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.682 13:05:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.682 
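TEST exit_on_failed_rpc_init, which ends above, provokes exactly one failure mode: a second spdk_tgt instance tries to create its RPC listener on /var/tmp/spdk.sock while the first instance still owns it, rpc_listen fails, and spdk_app_stop exits non-zero. The shape of that conflict, as a sketch (core masks as in the trace; waitforlisten and timing details omitted):

    spdk_tgt -m 0x1 &                    # first target binds /var/tmp/spdk.sock
    first=$!
    if spdk_tgt -m 0x2; then             # second init must fail: socket in use
        exit 1                           # success here would mean a regression
    fi
    kill "$first"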
************************************ 00:11:28.682 START TEST rpc_client 00:11:28.682 ************************************ 00:11:28.682 13:05:34 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:28.682 * Looking for test storage... 00:11:28.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.682 13:05:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:28.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.682 --rc genhtml_branch_coverage=1 00:11:28.682 --rc genhtml_function_coverage=1 00:11:28.682 --rc genhtml_legend=1 00:11:28.682 --rc geninfo_all_blocks=1 00:11:28.682 --rc geninfo_unexecuted_blocks=1 00:11:28.682 00:11:28.682 ' 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:28.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.682 --rc genhtml_branch_coverage=1 00:11:28.682 --rc genhtml_function_coverage=1 00:11:28.682 --rc genhtml_legend=1 00:11:28.682 --rc geninfo_all_blocks=1 00:11:28.682 --rc geninfo_unexecuted_blocks=1 00:11:28.682 00:11:28.682 ' 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:28.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.682 --rc genhtml_branch_coverage=1 00:11:28.682 --rc genhtml_function_coverage=1 00:11:28.682 --rc genhtml_legend=1 00:11:28.682 --rc geninfo_all_blocks=1 00:11:28.682 --rc geninfo_unexecuted_blocks=1 00:11:28.682 00:11:28.682 ' 00:11:28.682 13:05:35 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:28.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.682 --rc genhtml_branch_coverage=1 00:11:28.682 --rc genhtml_function_coverage=1 00:11:28.682 --rc genhtml_legend=1 00:11:28.682 --rc geninfo_all_blocks=1 00:11:28.682 --rc geninfo_unexecuted_blocks=1 00:11:28.682 00:11:28.682 ' 00:11:28.682 13:05:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:28.940 OK 00:11:28.940 13:05:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:28.940 00:11:28.940 real 0m0.245s 00:11:28.940 user 0m0.149s 00:11:28.940 sys 0m0.106s 00:11:28.940 13:05:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.940 13:05:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:28.940 ************************************ 00:11:28.940 END TEST rpc_client 00:11:28.940 ************************************ 00:11:28.940 13:05:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:28.940 13:05:35 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.940 13:05:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.940 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:11:28.940 ************************************ 00:11:28.940 START TEST json_config 00:11:28.940 ************************************ 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:28.940 13:05:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.940 13:05:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.940 13:05:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.940 13:05:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.940 13:05:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.940 13:05:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.940 13:05:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.940 13:05:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.940 13:05:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.940 13:05:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.940 13:05:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.940 13:05:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:11:28.940 13:05:35 json_config -- scripts/common.sh@345 -- # : 1 00:11:28.940 13:05:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.940 13:05:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.940 13:05:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:11:28.940 13:05:35 json_config -- scripts/common.sh@353 -- # local d=1 00:11:28.940 13:05:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.940 13:05:35 json_config -- scripts/common.sh@355 -- # echo 1 00:11:28.940 13:05:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.940 13:05:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:11:28.940 13:05:35 json_config -- scripts/common.sh@353 -- # local d=2 00:11:28.940 13:05:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.940 13:05:35 json_config -- scripts/common.sh@355 -- # echo 2 00:11:28.940 13:05:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.940 13:05:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.940 13:05:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.940 13:05:35 json_config -- scripts/common.sh@368 -- # return 0 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:28.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.940 --rc genhtml_branch_coverage=1 00:11:28.940 --rc genhtml_function_coverage=1 00:11:28.940 --rc genhtml_legend=1 00:11:28.940 --rc geninfo_all_blocks=1 00:11:28.940 --rc geninfo_unexecuted_blocks=1 00:11:28.940 00:11:28.940 ' 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:28.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.940 --rc genhtml_branch_coverage=1 00:11:28.940 --rc genhtml_function_coverage=1 00:11:28.940 --rc genhtml_legend=1 00:11:28.940 --rc geninfo_all_blocks=1 00:11:28.940 --rc geninfo_unexecuted_blocks=1 00:11:28.940 00:11:28.940 ' 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:28.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.940 --rc genhtml_branch_coverage=1 00:11:28.940 --rc genhtml_function_coverage=1 00:11:28.940 --rc genhtml_legend=1 00:11:28.940 --rc geninfo_all_blocks=1 00:11:28.940 --rc geninfo_unexecuted_blocks=1 00:11:28.940 00:11:28.940 ' 00:11:28.940 13:05:35 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:28.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.940 --rc genhtml_branch_coverage=1 00:11:28.940 --rc genhtml_function_coverage=1 00:11:28.940 --rc genhtml_legend=1 00:11:28.940 --rc geninfo_all_blocks=1 00:11:28.940 --rc geninfo_unexecuted_blocks=1 00:11:28.940 00:11:28.940 ' 00:11:28.940 13:05:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.940 13:05:35 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e8b9e76b-c82e-4bbd-825d-5339581b2dd8 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e8b9e76b-c82e-4bbd-825d-5339581b2dd8 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.940 13:05:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.940 13:05:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.940 13:05:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.940 13:05:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.940 13:05:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.940 13:05:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.940 13:05:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.940 13:05:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.940 13:05:35 json_config -- paths/export.sh@5 -- # export PATH 00:11:28.941 13:05:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.941 13:05:35 json_config -- nvmf/common.sh@51 -- # : 0 00:11:28.941 13:05:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.941 13:05:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.941 13:05:35 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.198 13:05:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.198 13:05:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.198 13:05:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.198 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.198 13:05:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.198 13:05:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.198 13:05:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.198 13:05:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:29.198 13:05:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:29.198 13:05:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:29.198 13:05:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:29.198 13:05:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:29.198 13:05:35 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:11:29.198 WARNING: No tests are enabled so not running JSON configuration tests 00:11:29.198 13:05:35 json_config -- json_config/json_config.sh@28 -- # exit 0 00:11:29.198 00:11:29.198 real 0m0.184s 00:11:29.198 user 0m0.128s 00:11:29.198 sys 0m0.051s 00:11:29.198 ************************************ 00:11:29.198 END TEST json_config 00:11:29.198 ************************************ 00:11:29.198 13:05:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.199 13:05:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:29.199 13:05:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:29.199 13:05:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.199 13:05:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.199 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:11:29.199 ************************************ 00:11:29.199 START TEST json_config_extra_key 00:11:29.199 ************************************ 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.199 13:05:35 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.199 --rc genhtml_branch_coverage=1 00:11:29.199 --rc genhtml_function_coverage=1 00:11:29.199 --rc genhtml_legend=1 00:11:29.199 --rc geninfo_all_blocks=1 00:11:29.199 --rc geninfo_unexecuted_blocks=1 00:11:29.199 00:11:29.199 ' 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.199 --rc genhtml_branch_coverage=1 00:11:29.199 --rc genhtml_function_coverage=1 00:11:29.199 --rc genhtml_legend=1 00:11:29.199 --rc geninfo_all_blocks=1 00:11:29.199 --rc geninfo_unexecuted_blocks=1 00:11:29.199 00:11:29.199 ' 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.199 --rc genhtml_branch_coverage=1 00:11:29.199 --rc genhtml_function_coverage=1 00:11:29.199 --rc genhtml_legend=1 00:11:29.199 --rc geninfo_all_blocks=1 00:11:29.199 --rc geninfo_unexecuted_blocks=1 00:11:29.199 00:11:29.199 ' 00:11:29.199 13:05:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.199 --rc genhtml_branch_coverage=1 00:11:29.199 --rc 
genhtml_function_coverage=1 00:11:29.199 --rc genhtml_legend=1 00:11:29.199 --rc geninfo_all_blocks=1 00:11:29.199 --rc geninfo_unexecuted_blocks=1 00:11:29.199 00:11:29.199 ' 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e8b9e76b-c82e-4bbd-825d-5339581b2dd8 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e8b9e76b-c82e-4bbd-825d-5339581b2dd8 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.199 13:05:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.199 13:05:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.199 13:05:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.199 13:05:35 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.199 13:05:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:29.199 13:05:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.199 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.199 13:05:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:29.199 INFO: launching applications... 00:11:29.199 Waiting for target to run... 00:11:29.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
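Both json_config and json_config_extra_key trip the same non-fatal error while sourcing test/nvmf/common.sh: line 33 evaluates '[' '' -eq 1 ']', and the test builtin's -eq requires integer operands, so an unset flag expanding to the empty string prints "[: : integer expression expected" and the branch is simply skipped. The suites continue regardless, but the noise is avoidable by giving the expansion an integer default. A minimal sketch of the guard; the flag name is hypothetical, since the log does not show which variable line 33 reads:

  # Hypothetical flag name -- substitute whatever nvmf/common.sh line 33 tests.
  # ${VAR:-0} guarantees the test builtin always sees an integer operand.
  if [ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ]; then
      : # NIC-specific setup would go here
  fi

The paths/export.sh traces above also show PATH being prepended on every source, which is why the go/golangci/protoc entries repeat several times over; harmless, but worth keeping in mind when reading the PATH dumps.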
00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:29.199 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:29.200 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:29.200 13:05:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59129 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59129 /var/tmp/spdk_tgt.sock 00:11:29.200 13:05:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59129 ']' 00:11:29.200 13:05:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:29.200 13:05:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.200 13:05:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:29.200 13:05:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:29.200 13:05:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.200 13:05:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:29.458 [2024-12-06 13:05:35.820624] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
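The launch sequence traced above, and the teardown that follows below, reduce to two polling idioms: wait until the freshly started target answers on its RPC socket, then, at shutdown, SIGINT the process and poll kill -0 until it exits (the trace below shows rounds of sleep 0.5 bounded by i < 30, i.e. a 15 s budget). A simplified sketch of both, not the actual autotest helpers:

  # Sketch only; the real logic lives in common/autotest_common.sh
  # (waitforlisten) and test/json_config/common.sh (shutdown loop).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  wait_for_listen() {                  # poll the RPC socket until it answers
      local pid=$1 sock=$2 retries=100
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1          # target died early
          "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }

  wait_for_exit() {                    # SIGINT, then poll up to 30 * 0.5 s
      local pid=$1 i
      kill -SIGINT "$pid"
      for (( i = 0; i < 30; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 0
          sleep 0.5
      done
      return 1                         # still running; caller must escalate
  }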
00:11:29.458 [2024-12-06 13:05:35.821111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59129 ] 00:11:29.716 [2024-12-06 13:05:36.160033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.974 [2024-12-06 13:05:36.288515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.540 13:05:37 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.540 13:05:37 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:30.540 00:11:30.540 13:05:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:11:30.540 INFO: shutting down applications... 00:11:30.540 13:05:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59129 ]] 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59129 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59129 00:11:30.540 13:05:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:31.107 13:05:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:31.107 13:05:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:31.107 13:05:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59129 00:11:31.107 13:05:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:31.674 13:05:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:31.674 13:05:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:31.674 13:05:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59129 00:11:31.674 13:05:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:32.239 13:05:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:32.239 13:05:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:32.239 13:05:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59129 00:11:32.239 13:05:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:32.804 13:05:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:32.804 13:05:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:32.804 13:05:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59129 00:11:32.804 13:05:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:33.061 13:05:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:33.061 13:05:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:33.061 13:05:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59129 
00:11:33.061 13:05:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:33.625 13:05:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:33.625 13:05:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:33.625 13:05:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59129 00:11:33.625 13:05:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:33.625 13:05:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:33.625 SPDK target shutdown done 00:11:33.625 13:05:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:33.625 13:05:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:33.625 13:05:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:33.625 Success 00:11:33.625 ************************************ 00:11:33.625 END TEST json_config_extra_key 00:11:33.625 ************************************ 00:11:33.625 00:11:33.625 real 0m4.549s 00:11:33.625 user 0m4.090s 00:11:33.625 sys 0m0.487s 00:11:33.626 13:05:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.626 13:05:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:33.626 13:05:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:33.626 13:05:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.626 13:05:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.626 13:05:40 -- common/autotest_common.sh@10 -- # set +x 00:11:33.626 ************************************ 00:11:33.626 START TEST alias_rpc 00:11:33.626 ************************************ 00:11:33.626 13:05:40 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:33.884 * Looking for test storage... 
00:11:33.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:33.884 13:05:40 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.884 13:05:40 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.884 13:05:40 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.884 13:05:40 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.884 13:05:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:33.884 13:05:40 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.884 13:05:40 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.884 --rc genhtml_branch_coverage=1 00:11:33.884 --rc genhtml_function_coverage=1 00:11:33.884 --rc genhtml_legend=1 00:11:33.884 --rc geninfo_all_blocks=1 00:11:33.884 --rc geninfo_unexecuted_blocks=1 00:11:33.884 00:11:33.884 ' 00:11:33.884 13:05:40 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.884 --rc genhtml_branch_coverage=1 00:11:33.884 --rc genhtml_function_coverage=1 00:11:33.884 --rc genhtml_legend=1 00:11:33.884 --rc geninfo_all_blocks=1 00:11:33.884 --rc geninfo_unexecuted_blocks=1 00:11:33.884 00:11:33.884 ' 00:11:33.884 13:05:40 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.885 --rc genhtml_branch_coverage=1 00:11:33.885 --rc genhtml_function_coverage=1 00:11:33.885 --rc genhtml_legend=1 00:11:33.885 --rc geninfo_all_blocks=1 00:11:33.885 --rc geninfo_unexecuted_blocks=1 00:11:33.885 00:11:33.885 ' 00:11:33.885 13:05:40 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.885 --rc genhtml_branch_coverage=1 00:11:33.885 --rc genhtml_function_coverage=1 00:11:33.885 --rc genhtml_legend=1 00:11:33.885 --rc geninfo_all_blocks=1 00:11:33.885 --rc geninfo_unexecuted_blocks=1 00:11:33.885 00:11:33.885 ' 00:11:33.885 13:05:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:33.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.885 13:05:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59238 00:11:33.885 13:05:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59238 00:11:33.885 13:05:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:33.885 13:05:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59238 ']' 00:11:33.885 13:05:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.885 13:05:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.885 13:05:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.885 13:05:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.885 13:05:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.145 [2024-12-06 13:05:40.432043] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
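The lcov probe that opens every suite (re-run just above for alias_rpc) drives cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' via IFS and compared field by field, which is how 1.15 sorts below 2 and selects the coverage flags exported as LCOV_OPTS. A compressed sketch of that comparison, assuming plain numeric fields as in the versions seen here (the real helper also validates each field with its decimal() check):

  # Simplified rendering of the lt()/cmp_versions chain from scripts/common.sh.
  version_lt() {                       # returns 0 when $1 < $2
      local IFS=.-:                    # split fields exactly as the trace shows
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0    # missing fields act as 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                         # equal versions are not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'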
00:11:34.145 [2024-12-06 13:05:40.432449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ] 00:11:34.145 [2024-12-06 13:05:40.630718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.404 [2024-12-06 13:05:40.763556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.338 13:05:41 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.338 13:05:41 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:35.338 13:05:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:35.596 13:05:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59238 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59238 ']' 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59238 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59238 00:11:35.596 killing process with pid 59238 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59238' 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 59238 00:11:35.596 13:05:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 59238 00:11:38.127 ************************************ 00:11:38.127 END TEST alias_rpc 00:11:38.127 ************************************ 00:11:38.127 00:11:38.127 real 0m4.016s 00:11:38.127 user 0m4.431s 00:11:38.127 sys 0m0.503s 00:11:38.127 13:05:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.127 13:05:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.127 13:05:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:38.127 13:05:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:38.127 13:05:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.127 13:05:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.127 13:05:44 -- common/autotest_common.sh@10 -- # set +x 00:11:38.127 ************************************ 00:11:38.127 START TEST spdkcli_tcp 00:11:38.127 ************************************ 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:38.127 * Looking for test storage... 
00:11:38.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.127 13:05:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.127 --rc genhtml_branch_coverage=1 00:11:38.127 --rc genhtml_function_coverage=1 00:11:38.127 --rc genhtml_legend=1 00:11:38.127 --rc geninfo_all_blocks=1 00:11:38.127 --rc geninfo_unexecuted_blocks=1 00:11:38.127 00:11:38.127 ' 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.127 --rc genhtml_branch_coverage=1 00:11:38.127 --rc genhtml_function_coverage=1 00:11:38.127 --rc genhtml_legend=1 00:11:38.127 --rc geninfo_all_blocks=1 00:11:38.127 --rc geninfo_unexecuted_blocks=1 00:11:38.127 
00:11:38.127 ' 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.127 --rc genhtml_branch_coverage=1 00:11:38.127 --rc genhtml_function_coverage=1 00:11:38.127 --rc genhtml_legend=1 00:11:38.127 --rc geninfo_all_blocks=1 00:11:38.127 --rc geninfo_unexecuted_blocks=1 00:11:38.127 00:11:38.127 ' 00:11:38.127 13:05:44 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.127 --rc genhtml_branch_coverage=1 00:11:38.127 --rc genhtml_function_coverage=1 00:11:38.127 --rc genhtml_legend=1 00:11:38.127 --rc geninfo_all_blocks=1 00:11:38.127 --rc geninfo_unexecuted_blocks=1 00:11:38.127 00:11:38.127 ' 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:38.128 13:05:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.128 13:05:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59344 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:38.128 13:05:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59344 00:11:38.128 13:05:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59344 ']' 00:11:38.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.128 13:05:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.128 13:05:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.128 13:05:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.128 13:05:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.128 13:05:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.128 [2024-12-06 13:05:44.456427] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
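spdkcli_tcp starts the target on two cores (-m 0x3; the pair of reactor notices below confirms it) and verifies that the JSON-RPC server is reachable over TCP as well as over its UNIX socket, using the IP_ADDRESS/PORT constants set above. The bridge is a single socat, as traced below; stripped of bookkeeping, the check amounts to this sketch (it assumes spdk_tgt is already answering on /var/tmp/spdk.sock, as waitforlisten established):

  # Reduced spdkcli/tcp.sh flow. socat forwards a TCP connection on 9998
  # to the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # -r/-t bound the client's connection retries and timeout while socat settles.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
      -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid" 2>/dev/null || true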
00:11:38.128 [2024-12-06 13:05:44.456585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59344 ] 00:11:38.128 [2024-12-06 13:05:44.649264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.401 [2024-12-06 13:05:44.753273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.401 [2024-12-06 13:05:44.753290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.027 13:05:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.027 13:05:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:11:39.027 13:05:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59367 00:11:39.027 13:05:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:39.027 13:05:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:39.592 [ 00:11:39.592 "bdev_malloc_delete", 00:11:39.592 "bdev_malloc_create", 00:11:39.592 "bdev_null_resize", 00:11:39.592 "bdev_null_delete", 00:11:39.592 "bdev_null_create", 00:11:39.592 "bdev_nvme_cuse_unregister", 00:11:39.592 "bdev_nvme_cuse_register", 00:11:39.592 "bdev_opal_new_user", 00:11:39.592 "bdev_opal_set_lock_state", 00:11:39.592 "bdev_opal_delete", 00:11:39.592 "bdev_opal_get_info", 00:11:39.592 "bdev_opal_create", 00:11:39.592 "bdev_nvme_opal_revert", 00:11:39.592 "bdev_nvme_opal_init", 00:11:39.592 "bdev_nvme_send_cmd", 00:11:39.593 "bdev_nvme_set_keys", 00:11:39.593 "bdev_nvme_get_path_iostat", 00:11:39.593 "bdev_nvme_get_mdns_discovery_info", 00:11:39.593 "bdev_nvme_stop_mdns_discovery", 00:11:39.593 "bdev_nvme_start_mdns_discovery", 00:11:39.593 "bdev_nvme_set_multipath_policy", 00:11:39.593 "bdev_nvme_set_preferred_path", 00:11:39.593 "bdev_nvme_get_io_paths", 00:11:39.593 "bdev_nvme_remove_error_injection", 00:11:39.593 "bdev_nvme_add_error_injection", 00:11:39.593 "bdev_nvme_get_discovery_info", 00:11:39.593 "bdev_nvme_stop_discovery", 00:11:39.593 "bdev_nvme_start_discovery", 00:11:39.593 "bdev_nvme_get_controller_health_info", 00:11:39.593 "bdev_nvme_disable_controller", 00:11:39.593 "bdev_nvme_enable_controller", 00:11:39.593 "bdev_nvme_reset_controller", 00:11:39.593 "bdev_nvme_get_transport_statistics", 00:11:39.593 "bdev_nvme_apply_firmware", 00:11:39.593 "bdev_nvme_detach_controller", 00:11:39.593 "bdev_nvme_get_controllers", 00:11:39.593 "bdev_nvme_attach_controller", 00:11:39.593 "bdev_nvme_set_hotplug", 00:11:39.593 "bdev_nvme_set_options", 00:11:39.593 "bdev_passthru_delete", 00:11:39.593 "bdev_passthru_create", 00:11:39.593 "bdev_lvol_set_parent_bdev", 00:11:39.593 "bdev_lvol_set_parent", 00:11:39.593 "bdev_lvol_check_shallow_copy", 00:11:39.593 "bdev_lvol_start_shallow_copy", 00:11:39.593 "bdev_lvol_grow_lvstore", 00:11:39.593 "bdev_lvol_get_lvols", 00:11:39.593 "bdev_lvol_get_lvstores", 00:11:39.593 "bdev_lvol_delete", 00:11:39.593 "bdev_lvol_set_read_only", 00:11:39.593 "bdev_lvol_resize", 00:11:39.593 "bdev_lvol_decouple_parent", 00:11:39.593 "bdev_lvol_inflate", 00:11:39.593 "bdev_lvol_rename", 00:11:39.593 "bdev_lvol_clone_bdev", 00:11:39.593 "bdev_lvol_clone", 00:11:39.593 "bdev_lvol_snapshot", 00:11:39.593 "bdev_lvol_create", 00:11:39.593 "bdev_lvol_delete_lvstore", 00:11:39.593 "bdev_lvol_rename_lvstore", 00:11:39.593 
"bdev_lvol_create_lvstore", 00:11:39.593 "bdev_raid_set_options", 00:11:39.593 "bdev_raid_remove_base_bdev", 00:11:39.593 "bdev_raid_add_base_bdev", 00:11:39.593 "bdev_raid_delete", 00:11:39.593 "bdev_raid_create", 00:11:39.593 "bdev_raid_get_bdevs", 00:11:39.593 "bdev_error_inject_error", 00:11:39.593 "bdev_error_delete", 00:11:39.593 "bdev_error_create", 00:11:39.593 "bdev_split_delete", 00:11:39.593 "bdev_split_create", 00:11:39.593 "bdev_delay_delete", 00:11:39.593 "bdev_delay_create", 00:11:39.593 "bdev_delay_update_latency", 00:11:39.593 "bdev_zone_block_delete", 00:11:39.593 "bdev_zone_block_create", 00:11:39.593 "blobfs_create", 00:11:39.593 "blobfs_detect", 00:11:39.593 "blobfs_set_cache_size", 00:11:39.593 "bdev_xnvme_delete", 00:11:39.593 "bdev_xnvme_create", 00:11:39.593 "bdev_aio_delete", 00:11:39.593 "bdev_aio_rescan", 00:11:39.593 "bdev_aio_create", 00:11:39.593 "bdev_ftl_set_property", 00:11:39.593 "bdev_ftl_get_properties", 00:11:39.593 "bdev_ftl_get_stats", 00:11:39.593 "bdev_ftl_unmap", 00:11:39.593 "bdev_ftl_unload", 00:11:39.593 "bdev_ftl_delete", 00:11:39.593 "bdev_ftl_load", 00:11:39.593 "bdev_ftl_create", 00:11:39.593 "bdev_virtio_attach_controller", 00:11:39.593 "bdev_virtio_scsi_get_devices", 00:11:39.593 "bdev_virtio_detach_controller", 00:11:39.593 "bdev_virtio_blk_set_hotplug", 00:11:39.593 "bdev_iscsi_delete", 00:11:39.593 "bdev_iscsi_create", 00:11:39.593 "bdev_iscsi_set_options", 00:11:39.593 "accel_error_inject_error", 00:11:39.593 "ioat_scan_accel_module", 00:11:39.593 "dsa_scan_accel_module", 00:11:39.593 "iaa_scan_accel_module", 00:11:39.593 "keyring_file_remove_key", 00:11:39.593 "keyring_file_add_key", 00:11:39.593 "keyring_linux_set_options", 00:11:39.593 "fsdev_aio_delete", 00:11:39.593 "fsdev_aio_create", 00:11:39.593 "iscsi_get_histogram", 00:11:39.593 "iscsi_enable_histogram", 00:11:39.593 "iscsi_set_options", 00:11:39.593 "iscsi_get_auth_groups", 00:11:39.593 "iscsi_auth_group_remove_secret", 00:11:39.593 "iscsi_auth_group_add_secret", 00:11:39.593 "iscsi_delete_auth_group", 00:11:39.593 "iscsi_create_auth_group", 00:11:39.593 "iscsi_set_discovery_auth", 00:11:39.593 "iscsi_get_options", 00:11:39.593 "iscsi_target_node_request_logout", 00:11:39.593 "iscsi_target_node_set_redirect", 00:11:39.593 "iscsi_target_node_set_auth", 00:11:39.593 "iscsi_target_node_add_lun", 00:11:39.593 "iscsi_get_stats", 00:11:39.593 "iscsi_get_connections", 00:11:39.593 "iscsi_portal_group_set_auth", 00:11:39.593 "iscsi_start_portal_group", 00:11:39.593 "iscsi_delete_portal_group", 00:11:39.593 "iscsi_create_portal_group", 00:11:39.593 "iscsi_get_portal_groups", 00:11:39.593 "iscsi_delete_target_node", 00:11:39.593 "iscsi_target_node_remove_pg_ig_maps", 00:11:39.593 "iscsi_target_node_add_pg_ig_maps", 00:11:39.593 "iscsi_create_target_node", 00:11:39.593 "iscsi_get_target_nodes", 00:11:39.593 "iscsi_delete_initiator_group", 00:11:39.593 "iscsi_initiator_group_remove_initiators", 00:11:39.593 "iscsi_initiator_group_add_initiators", 00:11:39.593 "iscsi_create_initiator_group", 00:11:39.593 "iscsi_get_initiator_groups", 00:11:39.593 "nvmf_set_crdt", 00:11:39.593 "nvmf_set_config", 00:11:39.593 "nvmf_set_max_subsystems", 00:11:39.593 "nvmf_stop_mdns_prr", 00:11:39.593 "nvmf_publish_mdns_prr", 00:11:39.593 "nvmf_subsystem_get_listeners", 00:11:39.593 "nvmf_subsystem_get_qpairs", 00:11:39.593 "nvmf_subsystem_get_controllers", 00:11:39.593 "nvmf_get_stats", 00:11:39.593 "nvmf_get_transports", 00:11:39.593 "nvmf_create_transport", 00:11:39.593 "nvmf_get_targets", 00:11:39.593 
"nvmf_delete_target", 00:11:39.593 "nvmf_create_target", 00:11:39.593 "nvmf_subsystem_allow_any_host", 00:11:39.593 "nvmf_subsystem_set_keys", 00:11:39.593 "nvmf_subsystem_remove_host", 00:11:39.593 "nvmf_subsystem_add_host", 00:11:39.593 "nvmf_ns_remove_host", 00:11:39.593 "nvmf_ns_add_host", 00:11:39.593 "nvmf_subsystem_remove_ns", 00:11:39.593 "nvmf_subsystem_set_ns_ana_group", 00:11:39.593 "nvmf_subsystem_add_ns", 00:11:39.593 "nvmf_subsystem_listener_set_ana_state", 00:11:39.593 "nvmf_discovery_get_referrals", 00:11:39.593 "nvmf_discovery_remove_referral", 00:11:39.593 "nvmf_discovery_add_referral", 00:11:39.593 "nvmf_subsystem_remove_listener", 00:11:39.593 "nvmf_subsystem_add_listener", 00:11:39.593 "nvmf_delete_subsystem", 00:11:39.593 "nvmf_create_subsystem", 00:11:39.593 "nvmf_get_subsystems", 00:11:39.593 "env_dpdk_get_mem_stats", 00:11:39.593 "nbd_get_disks", 00:11:39.593 "nbd_stop_disk", 00:11:39.593 "nbd_start_disk", 00:11:39.593 "ublk_recover_disk", 00:11:39.593 "ublk_get_disks", 00:11:39.593 "ublk_stop_disk", 00:11:39.593 "ublk_start_disk", 00:11:39.593 "ublk_destroy_target", 00:11:39.593 "ublk_create_target", 00:11:39.593 "virtio_blk_create_transport", 00:11:39.593 "virtio_blk_get_transports", 00:11:39.593 "vhost_controller_set_coalescing", 00:11:39.593 "vhost_get_controllers", 00:11:39.593 "vhost_delete_controller", 00:11:39.593 "vhost_create_blk_controller", 00:11:39.593 "vhost_scsi_controller_remove_target", 00:11:39.593 "vhost_scsi_controller_add_target", 00:11:39.593 "vhost_start_scsi_controller", 00:11:39.593 "vhost_create_scsi_controller", 00:11:39.593 "thread_set_cpumask", 00:11:39.593 "scheduler_set_options", 00:11:39.593 "framework_get_governor", 00:11:39.593 "framework_get_scheduler", 00:11:39.593 "framework_set_scheduler", 00:11:39.593 "framework_get_reactors", 00:11:39.593 "thread_get_io_channels", 00:11:39.593 "thread_get_pollers", 00:11:39.593 "thread_get_stats", 00:11:39.593 "framework_monitor_context_switch", 00:11:39.593 "spdk_kill_instance", 00:11:39.593 "log_enable_timestamps", 00:11:39.593 "log_get_flags", 00:11:39.593 "log_clear_flag", 00:11:39.593 "log_set_flag", 00:11:39.593 "log_get_level", 00:11:39.593 "log_set_level", 00:11:39.593 "log_get_print_level", 00:11:39.593 "log_set_print_level", 00:11:39.593 "framework_enable_cpumask_locks", 00:11:39.593 "framework_disable_cpumask_locks", 00:11:39.593 "framework_wait_init", 00:11:39.593 "framework_start_init", 00:11:39.593 "scsi_get_devices", 00:11:39.593 "bdev_get_histogram", 00:11:39.593 "bdev_enable_histogram", 00:11:39.593 "bdev_set_qos_limit", 00:11:39.593 "bdev_set_qd_sampling_period", 00:11:39.593 "bdev_get_bdevs", 00:11:39.593 "bdev_reset_iostat", 00:11:39.593 "bdev_get_iostat", 00:11:39.593 "bdev_examine", 00:11:39.593 "bdev_wait_for_examine", 00:11:39.593 "bdev_set_options", 00:11:39.593 "accel_get_stats", 00:11:39.593 "accel_set_options", 00:11:39.593 "accel_set_driver", 00:11:39.593 "accel_crypto_key_destroy", 00:11:39.593 "accel_crypto_keys_get", 00:11:39.593 "accel_crypto_key_create", 00:11:39.593 "accel_assign_opc", 00:11:39.593 "accel_get_module_info", 00:11:39.593 "accel_get_opc_assignments", 00:11:39.593 "vmd_rescan", 00:11:39.593 "vmd_remove_device", 00:11:39.593 "vmd_enable", 00:11:39.593 "sock_get_default_impl", 00:11:39.593 "sock_set_default_impl", 00:11:39.593 "sock_impl_set_options", 00:11:39.593 "sock_impl_get_options", 00:11:39.593 "iobuf_get_stats", 00:11:39.593 "iobuf_set_options", 00:11:39.593 "keyring_get_keys", 00:11:39.593 "framework_get_pci_devices", 00:11:39.593 
"framework_get_config", 00:11:39.593 "framework_get_subsystems", 00:11:39.593 "fsdev_set_opts", 00:11:39.593 "fsdev_get_opts", 00:11:39.593 "trace_get_info", 00:11:39.593 "trace_get_tpoint_group_mask", 00:11:39.593 "trace_disable_tpoint_group", 00:11:39.593 "trace_enable_tpoint_group", 00:11:39.593 "trace_clear_tpoint_mask", 00:11:39.593 "trace_set_tpoint_mask", 00:11:39.593 "notify_get_notifications", 00:11:39.593 "notify_get_types", 00:11:39.593 "spdk_get_version", 00:11:39.593 "rpc_get_methods" 00:11:39.593 ] 00:11:39.593 13:05:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.593 13:05:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:39.593 13:05:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59344 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59344 ']' 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59344 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59344 00:11:39.593 killing process with pid 59344 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59344' 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59344 00:11:39.593 13:05:45 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59344 00:11:42.126 ************************************ 00:11:42.126 END TEST spdkcli_tcp 00:11:42.126 ************************************ 00:11:42.126 00:11:42.126 real 0m3.863s 00:11:42.126 user 0m7.146s 00:11:42.126 sys 0m0.519s 00:11:42.126 13:05:48 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.126 13:05:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.126 13:05:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:42.126 13:05:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.126 13:05:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.126 13:05:48 -- common/autotest_common.sh@10 -- # set +x 00:11:42.126 ************************************ 00:11:42.126 START TEST dpdk_mem_utility 00:11:42.126 ************************************ 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:42.126 * Looking for test storage... 
00:11:42.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.126 13:05:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.126 --rc genhtml_branch_coverage=1 00:11:42.126 --rc genhtml_function_coverage=1 00:11:42.126 --rc genhtml_legend=1 00:11:42.126 --rc geninfo_all_blocks=1 00:11:42.126 --rc geninfo_unexecuted_blocks=1 00:11:42.126 00:11:42.126 ' 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.126 --rc 
genhtml_branch_coverage=1 00:11:42.126 --rc genhtml_function_coverage=1 00:11:42.126 --rc genhtml_legend=1 00:11:42.126 --rc geninfo_all_blocks=1 00:11:42.126 --rc geninfo_unexecuted_blocks=1 00:11:42.126 00:11:42.126 ' 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.126 --rc genhtml_branch_coverage=1 00:11:42.126 --rc genhtml_function_coverage=1 00:11:42.126 --rc genhtml_legend=1 00:11:42.126 --rc geninfo_all_blocks=1 00:11:42.126 --rc geninfo_unexecuted_blocks=1 00:11:42.126 00:11:42.126 ' 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.126 --rc genhtml_branch_coverage=1 00:11:42.126 --rc genhtml_function_coverage=1 00:11:42.126 --rc genhtml_legend=1 00:11:42.126 --rc geninfo_all_blocks=1 00:11:42.126 --rc geninfo_unexecuted_blocks=1 00:11:42.126 00:11:42.126 ' 00:11:42.126 13:05:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:42.126 13:05:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59469 00:11:42.126 13:05:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:42.126 13:05:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59469 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59469 ']' 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.126 13:05:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:42.126 [2024-12-06 13:05:48.418522] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
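dpdk_mem_utility exercises the env_dpdk_get_mem_stats RPC (visible in the rpc_get_methods listing earlier) together with the dpdk_mem_info.py parser. As the trace below shows, the RPC reports the dump file it wrote, and the parser is then run twice: once for the heap/mempool/memzone summary and once with -m 0 for element-level detail on heap 0. The whole test is essentially this sketch:

  # Essence of test_dpdk_mem_info.sh: dump DPDK memory stats, then parse them.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  mem_info=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  "$rpc" env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  "$mem_info"                      # summary: heaps, mempools, memzones
  "$mem_info" -m 0                 # free/malloc element detail for heap 0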
00:11:42.126 [2024-12-06 13:05:48.418945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59469 ] 00:11:42.126 [2024-12-06 13:05:48.595465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.394 [2024-12-06 13:05:48.710820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.330 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.330 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:11:43.330 13:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:43.330 13:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:43.330 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.330 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:43.330 { 00:11:43.330 "filename": "/tmp/spdk_mem_dump.txt" 00:11:43.330 } 00:11:43.330 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.330 13:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:43.330 DPDK memory size 824.000000 MiB in 1 heap(s) 00:11:43.330 1 heaps totaling size 824.000000 MiB 00:11:43.330 size: 824.000000 MiB heap id: 0 00:11:43.330 end heaps---------- 00:11:43.330 9 mempools totaling size 603.782043 MiB 00:11:43.330 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:43.330 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:43.330 size: 100.555481 MiB name: bdev_io_59469 00:11:43.330 size: 50.003479 MiB name: msgpool_59469 00:11:43.330 size: 36.509338 MiB name: fsdev_io_59469 00:11:43.330 size: 21.763794 MiB name: PDU_Pool 00:11:43.330 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:43.330 size: 4.133484 MiB name: evtpool_59469 00:11:43.330 size: 0.026123 MiB name: Session_Pool 00:11:43.330 end mempools------- 00:11:43.330 6 memzones totaling size 4.142822 MiB 00:11:43.330 size: 1.000366 MiB name: RG_ring_0_59469 00:11:43.330 size: 1.000366 MiB name: RG_ring_1_59469 00:11:43.330 size: 1.000366 MiB name: RG_ring_4_59469 00:11:43.330 size: 1.000366 MiB name: RG_ring_5_59469 00:11:43.330 size: 0.125366 MiB name: RG_ring_2_59469 00:11:43.330 size: 0.015991 MiB name: RG_ring_3_59469 00:11:43.330 end memzones------- 00:11:43.330 13:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:43.330 heap id: 0 total size: 824.000000 MiB number of busy elements: 312 number of free elements: 18 00:11:43.330 list of free elements. 
size: 16.782104 MiB 00:11:43.330 element at address: 0x200006400000 with size: 1.995972 MiB 00:11:43.330 element at address: 0x20000a600000 with size: 1.995972 MiB 00:11:43.330 element at address: 0x200003e00000 with size: 1.991028 MiB 00:11:43.330 element at address: 0x200019500040 with size: 0.999939 MiB 00:11:43.330 element at address: 0x200019900040 with size: 0.999939 MiB 00:11:43.330 element at address: 0x200019a00000 with size: 0.999084 MiB 00:11:43.330 element at address: 0x200032600000 with size: 0.994324 MiB 00:11:43.330 element at address: 0x200000400000 with size: 0.992004 MiB 00:11:43.330 element at address: 0x200019200000 with size: 0.959656 MiB 00:11:43.330 element at address: 0x200019d00040 with size: 0.936401 MiB 00:11:43.330 element at address: 0x200000200000 with size: 0.716980 MiB 00:11:43.330 element at address: 0x20001b400000 with size: 0.563660 MiB 00:11:43.330 element at address: 0x200000c00000 with size: 0.489197 MiB 00:11:43.330 element at address: 0x200019600000 with size: 0.487976 MiB 00:11:43.330 element at address: 0x200019e00000 with size: 0.485413 MiB 00:11:43.330 element at address: 0x200012c00000 with size: 0.433228 MiB 00:11:43.330 element at address: 0x200028800000 with size: 0.390442 MiB 00:11:43.330 element at address: 0x200000800000 with size: 0.350891 MiB 00:11:43.330 list of standard malloc elements. size: 199.286987 MiB 00:11:43.330 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:11:43.330 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:11:43.330 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:11:43.330 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:11:43.330 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:11:43.330 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:11:43.330 element at address: 0x200019deff40 with size: 0.062683 MiB 00:11:43.330 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:11:43.330 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:11:43.330 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:11:43.330 element at address: 0x200012bff040 with size: 0.000305 MiB 00:11:43.330 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:11:43.330 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:11:43.330 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:11:43.330 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:11:43.330 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:11:43.331 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200000cff000 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff180 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff280 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff380 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff480 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff580 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff680 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff780 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff880 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bff980 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200019affc40 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4920c0 with size: 0.000244 MiB 
00:11:43.331 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:11:43.331 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:11:43.332 element at 
address: 0x20001b4952c0 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:11:43.332 element at address: 0x200028863f40 with size: 0.000244 MiB 00:11:43.332 element at address: 0x200028864040 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886af80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b080 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b180 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b280 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b380 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b480 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b580 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b680 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b780 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b880 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886b980 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886be80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c080 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c180 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c280 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c380 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c480 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c580 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c680 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c780 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c880 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886c980 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d080 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d180 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d280 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d380 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d480 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d580 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d680 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d780 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d880 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886d980 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886da80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886db80 
with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886de80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886df80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e080 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e180 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e280 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e380 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e480 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e580 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e680 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e780 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e880 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886e980 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f080 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f180 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f280 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f380 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f480 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f580 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f680 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f780 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f880 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886f980 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:11:43.332 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:11:43.332 list of memzone associated elements. 
size: 607.930908 MiB 00:11:43.332 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:11:43.332 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:43.332 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:11:43.332 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:43.332 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:11:43.332 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59469_0 00:11:43.332 element at address: 0x200000dff340 with size: 48.003113 MiB 00:11:43.332 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59469_0 00:11:43.332 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:11:43.332 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59469_0 00:11:43.332 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:11:43.332 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:43.332 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:11:43.332 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:43.332 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:11:43.332 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59469_0 00:11:43.332 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:11:43.332 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59469 00:11:43.332 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:11:43.332 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59469 00:11:43.332 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:11:43.332 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:43.332 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:11:43.332 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:43.332 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:11:43.332 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:43.332 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:11:43.332 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:43.332 element at address: 0x200000cff100 with size: 1.000549 MiB 00:11:43.332 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59469 00:11:43.333 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:11:43.333 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59469 00:11:43.333 element at address: 0x200019affd40 with size: 1.000549 MiB 00:11:43.333 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59469 00:11:43.333 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:11:43.333 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59469 00:11:43.333 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:11:43.333 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59469 00:11:43.333 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:11:43.333 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59469 00:11:43.333 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:11:43.333 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:43.333 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:11:43.333 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:43.333 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:11:43.333 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:11:43.333 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:11:43.333 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59469 00:11:43.333 element at address: 0x20000085df80 with size: 0.125549 MiB 00:11:43.333 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59469 00:11:43.333 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:11:43.333 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:43.333 element at address: 0x200028864140 with size: 0.023804 MiB 00:11:43.333 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:43.333 element at address: 0x200000859d40 with size: 0.016174 MiB 00:11:43.333 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59469 00:11:43.333 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:11:43.333 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:43.333 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:11:43.333 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59469 00:11:43.333 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:11:43.333 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59469 00:11:43.333 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:11:43.333 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59469 00:11:43.333 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:11:43.333 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:43.333 13:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:43.333 13:05:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59469 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59469 ']' 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59469 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59469 00:11:43.333 killing process with pid 59469 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59469' 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59469 00:11:43.333 13:05:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59469 00:11:45.863 ************************************ 00:11:45.863 END TEST dpdk_mem_utility 00:11:45.863 ************************************ 00:11:45.863 00:11:45.863 real 0m3.710s 00:11:45.863 user 0m3.952s 00:11:45.863 sys 0m0.472s 00:11:45.863 13:05:51 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.863 13:05:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:45.863 13:05:51 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:45.863 13:05:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:45.863 13:05:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.863 13:05:51 -- common/autotest_common.sh@10 -- # set +x 
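The dpdk_mem_utility run above exercises two pieces: the env_dpdk_get_mem_stats RPC, which has the target write its DPDK heap, mempool, and memzone state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump (totals by default, per-element detail for one heap with -m <heap-id>). As a hedged sketch only, the same flow can be reproduced by hand against a running target; paths assume a built SPDK tree at the repo root and are illustrative, not part of the test:

    ./build/bin/spdk_tgt &                   # start a target (path assumes a default build)
    ./scripts/rpc.py env_dpdk_get_mem_stats  # target writes /tmp/spdk_mem_dump.txt and returns the filename
    ./scripts/dpdk_mem_info.py               # summary: heaps, mempools, memzones (as printed above)
    ./scripts/dpdk_mem_info.py -m 0          # per-element breakdown of heap id 0 (the long listing above)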
00:11:45.863 ************************************ 00:11:45.863 START TEST event 00:11:45.863 ************************************ 00:11:45.863 13:05:51 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:45.863 * Looking for test storage... 00:11:45.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:45.863 13:05:51 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:45.863 13:05:51 event -- common/autotest_common.sh@1711 -- # lcov --version 00:11:45.863 13:05:51 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:45.863 13:05:52 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:45.863 13:05:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.863 13:05:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.863 13:05:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.863 13:05:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.863 13:05:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.863 13:05:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.863 13:05:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.863 13:05:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.863 13:05:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.863 13:05:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.863 13:05:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.863 13:05:52 event -- scripts/common.sh@344 -- # case "$op" in 00:11:45.863 13:05:52 event -- scripts/common.sh@345 -- # : 1 00:11:45.863 13:05:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.863 13:05:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.863 13:05:52 event -- scripts/common.sh@365 -- # decimal 1 00:11:45.863 13:05:52 event -- scripts/common.sh@353 -- # local d=1 00:11:45.863 13:05:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.863 13:05:52 event -- scripts/common.sh@355 -- # echo 1 00:11:45.863 13:05:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.863 13:05:52 event -- scripts/common.sh@366 -- # decimal 2 00:11:45.863 13:05:52 event -- scripts/common.sh@353 -- # local d=2 00:11:45.863 13:05:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.863 13:05:52 event -- scripts/common.sh@355 -- # echo 2 00:11:45.863 13:05:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.863 13:05:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.863 13:05:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.863 13:05:52 event -- scripts/common.sh@368 -- # return 0 00:11:45.863 13:05:52 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.863 13:05:52 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.863 --rc genhtml_branch_coverage=1 00:11:45.863 --rc genhtml_function_coverage=1 00:11:45.863 --rc genhtml_legend=1 00:11:45.863 --rc geninfo_all_blocks=1 00:11:45.863 --rc geninfo_unexecuted_blocks=1 00:11:45.863 00:11:45.863 ' 00:11:45.863 13:05:52 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.863 --rc genhtml_branch_coverage=1 00:11:45.863 --rc genhtml_function_coverage=1 00:11:45.863 --rc genhtml_legend=1 00:11:45.863 --rc 
geninfo_all_blocks=1 00:11:45.863 --rc geninfo_unexecuted_blocks=1 00:11:45.863 00:11:45.863 ' 00:11:45.863 13:05:52 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.863 --rc genhtml_branch_coverage=1 00:11:45.863 --rc genhtml_function_coverage=1 00:11:45.863 --rc genhtml_legend=1 00:11:45.863 --rc geninfo_all_blocks=1 00:11:45.863 --rc geninfo_unexecuted_blocks=1 00:11:45.863 00:11:45.863 ' 00:11:45.863 13:05:52 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.863 --rc genhtml_branch_coverage=1 00:11:45.863 --rc genhtml_function_coverage=1 00:11:45.863 --rc genhtml_legend=1 00:11:45.863 --rc geninfo_all_blocks=1 00:11:45.863 --rc geninfo_unexecuted_blocks=1 00:11:45.863 00:11:45.863 ' 00:11:45.863 13:05:52 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:45.863 13:05:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:45.863 13:05:52 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:45.863 13:05:52 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:45.863 13:05:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.863 13:05:52 event -- common/autotest_common.sh@10 -- # set +x 00:11:45.863 ************************************ 00:11:45.863 START TEST event_perf 00:11:45.863 ************************************ 00:11:45.863 13:05:52 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:45.863 Running I/O for 1 seconds...[2024-12-06 13:05:52.087702] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:45.863 [2024-12-06 13:05:52.088163] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:11:45.863 [2024-12-06 13:05:52.281321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.121 [2024-12-06 13:05:52.392580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.121 [2024-12-06 13:05:52.392758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.121 Running I/O for 1 seconds...[2024-12-06 13:05:52.392895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.121 [2024-12-06 13:05:52.392906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.494 00:11:47.494 lcore 0: 194809 00:11:47.494 lcore 1: 194808 00:11:47.494 lcore 2: 194807 00:11:47.494 lcore 3: 194807 00:11:47.494 done. 
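For reference, the event_perf binary invoked above drives events on every reactor in the given core mask for the requested number of seconds, then prints one event counter per lcore; the near-equal counts above (194807-194809) show the load staying balanced across the four reactors. A hedged sketch of the standalone invocation, matching the arguments in the log:

    # 4 reactors (mask 0xF), 1 second run; expect roughly equal per-lcore event counts
    ./test/event/event_perf/event_perf -m 0xF -t 1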
00:11:47.494 00:11:47.494 real 0m1.598s 00:11:47.494 user 0m4.360s 00:11:47.494 sys 0m0.114s 00:11:47.494 13:05:53 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.494 13:05:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:47.494 ************************************ 00:11:47.494 END TEST event_perf 00:11:47.494 ************************************ 00:11:47.495 13:05:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:47.495 13:05:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:47.495 13:05:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.495 13:05:53 event -- common/autotest_common.sh@10 -- # set +x 00:11:47.495 ************************************ 00:11:47.495 START TEST event_reactor 00:11:47.495 ************************************ 00:11:47.495 13:05:53 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:47.495 [2024-12-06 13:05:53.720328] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:47.495 [2024-12-06 13:05:53.720723] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59617 ] 00:11:47.495 [2024-12-06 13:05:53.892825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.495 [2024-12-06 13:05:54.001787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.934 test_start 00:11:48.934 oneshot 00:11:48.934 tick 100 00:11:48.934 tick 100 00:11:48.934 tick 250 00:11:48.934 tick 100 00:11:48.934 tick 100 00:11:48.934 tick 100 00:11:48.934 tick 250 00:11:48.934 tick 500 00:11:48.934 tick 100 00:11:48.934 tick 100 00:11:48.934 tick 250 00:11:48.934 tick 100 00:11:48.934 tick 100 00:11:48.934 test_end 00:11:48.934 00:11:48.934 real 0m1.562s 00:11:48.934 user 0m1.372s 00:11:48.934 sys 0m0.079s 00:11:48.934 ************************************ 00:11:48.934 END TEST event_reactor 00:11:48.934 ************************************ 00:11:48.934 13:05:55 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.934 13:05:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:48.934 13:05:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:48.934 13:05:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.934 13:05:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.934 13:05:55 event -- common/autotest_common.sh@10 -- # set +x 00:11:48.934 ************************************ 00:11:48.934 START TEST event_reactor_perf 00:11:48.934 ************************************ 00:11:48.934 13:05:55 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:48.934 [2024-12-06 13:05:55.329132] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
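The two tests around this point are single-reactor variants: event_reactor (just finished) fires a one-shot event plus timed pollers and prints the tick trace above (the tick 100/250/500 lines), while event_reactor_perf (starting here, results just below) measures raw event throughput on one reactor. A hedged sketch of both standalone invocations as they appear in the log:

    ./test/event/reactor/reactor -t 1              # one-shot event plus poller tick trace
    ./test/event/reactor_perf/reactor_perf -t 1    # prints events per second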
00:11:48.934 [2024-12-06 13:05:55.329472] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59653 ] 00:11:49.191 [2024-12-06 13:05:55.502297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.191 [2024-12-06 13:05:55.604610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.564 test_start 00:11:50.564 test_end 00:11:50.564 Performance: 274676 events per second 00:11:50.564 00:11:50.564 real 0m1.541s 00:11:50.564 user 0m1.357s 00:11:50.564 sys 0m0.074s 00:11:50.564 13:05:56 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.564 13:05:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:50.564 ************************************ 00:11:50.564 END TEST event_reactor_perf 00:11:50.564 ************************************ 00:11:50.564 13:05:56 event -- event/event.sh@49 -- # uname -s 00:11:50.564 13:05:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:50.564 13:05:56 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:50.564 13:05:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:50.564 13:05:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.564 13:05:56 event -- common/autotest_common.sh@10 -- # set +x 00:11:50.564 ************************************ 00:11:50.564 START TEST event_scheduler 00:11:50.564 ************************************ 00:11:50.564 13:05:56 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:50.564 * Looking for test storage... 
00:11:50.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:50.564 13:05:56 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:50.564 13:05:56 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:50.564 13:05:56 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:11:50.564 13:05:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:50.564 13:05:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.565 13:05:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.565 --rc genhtml_branch_coverage=1 00:11:50.565 --rc genhtml_function_coverage=1 00:11:50.565 --rc genhtml_legend=1 00:11:50.565 --rc geninfo_all_blocks=1 00:11:50.565 --rc geninfo_unexecuted_blocks=1 00:11:50.565 00:11:50.565 ' 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.565 --rc genhtml_branch_coverage=1 00:11:50.565 --rc genhtml_function_coverage=1 00:11:50.565 --rc genhtml_legend=1 00:11:50.565 --rc geninfo_all_blocks=1 00:11:50.565 --rc geninfo_unexecuted_blocks=1 00:11:50.565 00:11:50.565 ' 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.565 --rc genhtml_branch_coverage=1 00:11:50.565 --rc genhtml_function_coverage=1 00:11:50.565 --rc genhtml_legend=1 00:11:50.565 --rc geninfo_all_blocks=1 00:11:50.565 --rc geninfo_unexecuted_blocks=1 00:11:50.565 00:11:50.565 ' 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:50.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.565 --rc genhtml_branch_coverage=1 00:11:50.565 --rc genhtml_function_coverage=1 00:11:50.565 --rc genhtml_legend=1 00:11:50.565 --rc geninfo_all_blocks=1 00:11:50.565 --rc geninfo_unexecuted_blocks=1 00:11:50.565 00:11:50.565 ' 00:11:50.565 13:05:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:50.565 13:05:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59724 00:11:50.565 13:05:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:50.565 13:05:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:50.565 13:05:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59724 00:11:50.565 13:05:57 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59724 ']' 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.565 13:05:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:50.823 [2024-12-06 13:05:57.168759] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:50.823 [2024-12-06 13:05:57.169144] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59724 ] 00:11:50.823 [2024-12-06 13:05:57.347451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.081 [2024-12-06 13:05:57.469336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.081 [2024-12-06 13:05:57.469489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.081 [2024-12-06 13:05:57.469578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.081 [2024-12-06 13:05:57.469594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.012 13:05:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.012 13:05:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:11:52.012 13:05:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:52.012 13:05:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.012 13:05:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:52.012 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:52.012 POWER: Cannot set governor of lcore 0 to userspace 00:11:52.012 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:52.012 POWER: Cannot set governor of lcore 0 to performance 00:11:52.012 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:52.012 POWER: Cannot set governor of lcore 0 to userspace 00:11:52.012 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:52.012 POWER: Cannot set governor of lcore 0 to userspace 00:11:52.012 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:11:52.012 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:52.012 POWER: Unable to set Power Management Environment for lcore 0 00:11:52.012 [2024-12-06 13:05:58.243806] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:11:52.012 [2024-12-06 13:05:58.243875] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:11:52.012 [2024-12-06 13:05:58.243896] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:52.012 [2024-12-06 13:05:58.243919] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:52.012 [2024-12-06 13:05:58.243931] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:52.012 [2024-12-06 13:05:58.243945] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:52.012 13:05:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.012 13:05:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:52.012 13:05:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.012 13:05:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:52.270 [2024-12-06 13:05:58.543252] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:52.270 13:05:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.270 13:05:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:52.270 13:05:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:52.270 13:05:58 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.270 13:05:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:52.270 ************************************ 00:11:52.270 START TEST scheduler_create_thread 00:11:52.270 ************************************ 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:52.270 2 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:52.270 3 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:52.270 4 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:52.270 5
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:52.270 6
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:52.270 7
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:52.270 8
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:52.270 9
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.270 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:52.271 10
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.271 13:05:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:53.699 13:06:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:53.700 13:06:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:11:53.700 13:06:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:11:53.700 13:06:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:53.700 13:06:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:55.088 ************************************
00:11:55.088 END TEST scheduler_create_thread
00:11:55.088 ************************************
00:11:55.088 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.088
00:11:55.088 real 0m2.617s
00:11:55.088 user 0m0.017s
00:11:55.088 sys 0m0.006s
00:11:55.088 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:55.088 13:06:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:11:55.088 13:06:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:11:55.088 13:06:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59724
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59724 ']'
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59724
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59724
00:11:55.088 killing process with pid 59724 13:06:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59724'
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59724
00:11:55.088 13:06:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59724
00:11:55.346 [2024-12-06 13:06:01.653581] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:11:56.281
00:11:56.281 real 0m5.811s
00:11:56.281 user 0m10.668s
00:11:56.281 sys 0m0.430s
00:11:56.281 13:06:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:56.281 ************************************
00:11:56.281 END TEST event_scheduler
00:11:56.281 ************************************
00:11:56.281 13:06:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:11:56.281 13:06:02 event -- event/event.sh@51 -- # modprobe -n nbd
00:11:56.281 13:06:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:11:56.281 13:06:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:56.281 13:06:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:56.281 13:06:02 event -- common/autotest_common.sh@10 -- # set +x
00:11:56.281 ************************************
00:11:56.281 START TEST app_repeat
00:11:56.281 ************************************
00:11:56.281 13:06:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:11:56.281 Process app_repeat pid: 59835 spdk_app_start Round 0 13:06:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59835
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59835'
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:11:56.281 13:06:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59835 /var/tmp/spdk-nbd.sock
00:11:56.281 13:06:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59835 ']'
00:11:56.281 13:06:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:56.281 13:06:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:56.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 13:06:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:11:56.281 13:06:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:56.281 13:06:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:11:56.539 [2024-12-06 13:06:02.809150] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:11:56.539 [2024-12-06 13:06:02.809523] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59835 ]
00:11:56.539 [2024-12-06 13:06:02.997876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:56.798 [2024-12-06 13:06:03.130592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:56.798 [2024-12-06 13:06:03.130600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:57.733 13:06:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:57.733 13:06:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:11:57.733 13:06:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:11:57.990 Malloc0
00:11:57.990 13:06:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:11:58.249 Malloc1
00:11:58.249 13:06:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:58.249 13:06:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:11:58.507 /dev/nbd0
00:11:58.507 13:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:58.507 13:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:11:58.507 1+0 records in
00:11:58.507 1+0 records out
00:11:58.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767222 s, 5.3 MB/s
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:58.507 13:06:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:11:58.507 13:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:58.507 13:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:58.507 13:06:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:11:58.777 /dev/nbd1
00:11:58.777 13:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:11:58.777 13:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:11:58.777 1+0 records in
00:11:58.777 1+0 records out
00:11:58.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374459 s, 10.9 MB/s
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:11:58.777 13:06:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:11:59.038 13:06:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:59.038 13:06:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:11:59.038 13:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:59.038 13:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:59.038 13:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:59.038 13:06:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:59.038 13:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:11:59.296 {
00:11:59.296 "nbd_device": "/dev/nbd0",
00:11:59.296 "bdev_name": "Malloc0"
00:11:59.296 },
00:11:59.296 {
00:11:59.296 "nbd_device": "/dev/nbd1",
00:11:59.296 "bdev_name": "Malloc1"
00:11:59.296 }
00:11:59.296 ]'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:11:59.296 {
00:11:59.296 "nbd_device": "/dev/nbd0",
00:11:59.296 "bdev_name": "Malloc0"
00:11:59.296 },
00:11:59.296 {
00:11:59.296 "nbd_device": "/dev/nbd1",
00:11:59.296 "bdev_name": "Malloc1"
00:11:59.296 }
00:11:59.296 ]'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:11:59.296 /dev/nbd1'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:11:59.296 /dev/nbd1'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:11:59.296 256+0 records in
00:11:59.296 256+0 records out
00:11:59.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484027 s, 217 MB/s
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:11:59.296 256+0 records in
00:11:59.296 256+0 records out
00:11:59.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331843 s, 31.6 MB/s
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:11:59.296 256+0 records in
00:11:59.296 256+0 records out
00:11:59.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313679 s, 33.4 MB/s
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:59.296 13:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:59.554 13:06:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:00.121 13:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:00.121 13:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:00.121 13:06:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:00.121 13:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:00.121 13:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:00.121 13:06:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:00.121 13:06:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:12:00.122 13:06:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:12:00.122 13:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:00.122 13:06:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:00.122 13:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:00.380 13:06:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:12:00.380 13:06:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:12:00.946 13:06:07 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:12:02.318 [2024-12-06 13:06:08.474524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:02.318 [2024-12-06 13:06:08.577881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:02.318 [2024-12-06 13:06:08.577881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:02.318 [2024-12-06 13:06:08.748234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:12:02.318 [2024-12-06 13:06:08.748326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:12:04.217 spdk_app_start Round 1 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 13:06:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:12:04.217 13:06:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:12:04.217 13:06:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59835 /var/tmp/spdk-nbd.sock
00:12:04.217 13:06:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59835 ']'
00:12:04.217 13:06:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:12:04.217 13:06:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:04.217 13:06:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:12:04.217 13:06:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:04.217 13:06:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:12:04.217 13:06:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:04.217 13:06:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:12:04.217 13:06:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:12:04.475 Malloc0
00:12:04.475 13:06:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:12:05.038 Malloc1
00:12:05.038 13:06:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:12:05.038 13:06:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:05.038 13:06:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:12:05.038 13:06:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:05.039 13:06:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:12:05.603 /dev/nbd0
00:12:05.603 13:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:05.603 13:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:12:05.603 1+0 records in
00:12:05.603 1+0 records out
00:12:05.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343277 s, 11.9 MB/s
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:05.603 13:06:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:12:05.603 13:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:05.603 13:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:05.603 13:06:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:12:05.924 /dev/nbd1
00:12:05.924 13:06:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:05.924 13:06:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:12:05.924 1+0 records in
00:12:05.924 1+0 records out
00:12:05.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319726 s, 12.8 MB/s
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:05.924 13:06:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:12:05.924 13:06:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:05.924 13:06:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:05.924 13:06:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:05.924 13:06:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:05.924 13:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:12:06.182 {
00:12:06.182 "nbd_device": "/dev/nbd0",
00:12:06.182 "bdev_name": "Malloc0"
00:12:06.182 },
00:12:06.182 {
00:12:06.182 "nbd_device": "/dev/nbd1",
00:12:06.182 "bdev_name": "Malloc1"
00:12:06.182 }
00:12:06.182 ]'
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:12:06.182 {
00:12:06.182 "nbd_device": "/dev/nbd0",
00:12:06.182 "bdev_name": "Malloc0"
00:12:06.182 },
00:12:06.182 {
00:12:06.182 "nbd_device": "/dev/nbd1",
00:12:06.182 "bdev_name": "Malloc1"
00:12:06.182 }
00:12:06.182 ]'
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:12:06.182 /dev/nbd1'
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:12:06.182 /dev/nbd1'
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:12:06.182 256+0 records in
00:12:06.182 256+0 records out
00:12:06.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00884303 s, 119 MB/s
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:12:06.182 256+0 records in
00:12:06.182 256+0 records out
00:12:06.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311436 s, 33.7 MB/s
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:06.182 13:06:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:12:06.441 256+0 records in
00:12:06.441 256+0 records out
00:12:06.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0483227 s, 21.7 MB/s
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:06.441 13:06:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:06.699 13:06:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:06.700 13:06:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:06.958 13:06:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:07.525 13:06:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:12:07.525 13:06:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:12:08.092 13:06:14 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:12:09.025 [2024-12-06 13:06:15.390074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:09.025 [2024-12-06 13:06:15.489178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:09.025 [2024-12-06 13:06:15.489180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:09.282 [2024-12-06 13:06:15.655087] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:12:09.282 [2024-12-06 13:06:15.655198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:12:11.182 spdk_app_start Round 2 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 13:06:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:12:11.182 13:06:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:12:11.182 13:06:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59835 /var/tmp/spdk-nbd.sock
00:12:11.182 13:06:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59835 ']'
00:12:11.182 13:06:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:12:11.182 13:06:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:11.182 13:06:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:12:11.182 13:06:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:11.182 13:06:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:12:11.182 13:06:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:11.182 13:06:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:12:11.182 13:06:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:12:11.747 Malloc0
00:12:11.747 13:06:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:12:12.005 Malloc1
00:12:12.005 13:06:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:12:12.005 13:06:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:12.005 13:06:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:12.006 13:06:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:12:12.265 /dev/nbd0
00:12:12.265 13:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:12.265 13:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:12:12.265 1+0 records in
00:12:12.265 1+0 records out
00:12:12.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322307 s, 12.7 MB/s
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:12.265 13:06:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:12:12.265 13:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:12.265 13:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:12.265 13:06:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:12:12.524 /dev/nbd1
00:12:12.524 13:06:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:12.524 13:06:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:12.524 13:06:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:12:12.782 1+0 records in
00:12:12.782 1+0 records out
00:12:12.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373415 s, 11.0 MB/s
00:12:12.782 13:06:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:12:12.782 13:06:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:12:12.782 13:06:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:12:12.782 13:06:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:12.782 13:06:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:12:12.782 13:06:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:12.782 13:06:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:12.782 13:06:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:12.782 13:06:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:12.782 13:06:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:12:13.040 {
00:12:13.040 "nbd_device": "/dev/nbd0",
00:12:13.040 "bdev_name": "Malloc0"
00:12:13.040 },
00:12:13.040 {
00:12:13.040 "nbd_device": "/dev/nbd1",
00:12:13.040 "bdev_name": "Malloc1"
00:12:13.040 }
00:12:13.040 ]'
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:12:13.040 {
00:12:13.040 "nbd_device": "/dev/nbd0",
00:12:13.040 "bdev_name": "Malloc0"
00:12:13.040 },
00:12:13.040 {
00:12:13.040 "nbd_device": "/dev/nbd1",
00:12:13.040 "bdev_name": "Malloc1"
00:12:13.040 }
00:12:13.040 ]'
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:12:13.040 /dev/nbd1'
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:12:13.040 /dev/nbd1'
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:12:13.040 256+0 records in
00:12:13.040 256+0 records out
00:12:13.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692923 s, 151 MB/s
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:12:13.040 256+0 records in
00:12:13.040 256+0 records out
00:12:13.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0422094 s, 24.8 MB/s
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:12:13.040 256+0 records in
00:12:13.040 256+0 records out
00:12:13.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381059 s, 27.5 MB/s
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:12:13.040 13:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:13.041 13:06:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:13.607 13:06:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:13.864 13:06:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:14.122 13:06:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:12:14.122 13:06:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:12:14.695 13:06:21 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:12:15.636 [2024-12-06 13:06:22.133920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:15.895 [2024-12-06 13:06:22.234357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:15.895 [2024-12-06 13:06:22.234368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:15.895 [2024-12-06 13:06:22.402019] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:12:15.895 [2024-12-06 13:06:22.402113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:12:17.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 13:06:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59835 /var/tmp/spdk-nbd.sock
00:12:17.794 13:06:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59835 ']'
00:12:17.794 13:06:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:12:17.794 13:06:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:17.794 13:06:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:12:17.794 13:06:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:12:18.050 13:06:24 event.app_repeat -- event/event.sh@39 -- # killprocess 59835
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59835 ']'
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59835
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59835
00:12:18.050 killing process with pid 59835 13:06:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59835'
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59835
00:12:18.050 13:06:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59835
00:12:19.420 spdk_app_start is called in Round 0.
00:12:19.420 Shutdown signal received, stop current app iteration
00:12:19.420 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization...
00:12:19.420 spdk_app_start is called in Round 1.
00:12:19.420 Shutdown signal received, stop current app iteration
00:12:19.420 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization...
00:12:19.420 spdk_app_start is called in Round 2.
00:12:19.420 Shutdown signal received, stop current app iteration
00:12:19.420 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization...
00:12:19.420 spdk_app_start is called in Round 3.
00:12:19.420 Shutdown signal received, stop current app iteration
00:12:19.420 13:06:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:12:19.420 13:06:25 event.app_repeat -- event/event.sh@42 -- # return 0
00:12:19.420
00:12:19.420 real 0m22.860s
00:12:19.420 user 0m51.627s
00:12:19.420 sys 0m2.943s
00:12:19.420 13:06:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:19.420 ************************************
00:12:19.420 END TEST app_repeat
00:12:19.420 ************************************
00:12:19.420 13:06:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:12:19.420 13:06:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:12:19.420 13:06:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:12:19.420 13:06:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:19.420 13:06:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:19.420 13:06:25 event -- common/autotest_common.sh@10 -- # set +x
00:12:19.420 ************************************
00:12:19.420 START TEST cpu_locks
00:12:19.420 ************************************
00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:12:19.420 * Looking for test storage...
00:12:19.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:19.420 13:06:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:19.420 --rc genhtml_branch_coverage=1
00:12:19.420 --rc genhtml_function_coverage=1
00:12:19.420 --rc genhtml_legend=1
00:12:19.420 --rc geninfo_all_blocks=1
00:12:19.420 --rc geninfo_unexecuted_blocks=1
00:12:19.420
00:12:19.420 '
00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:19.420 --rc genhtml_branch_coverage=1
00:12:19.420 --rc genhtml_function_coverage=1
00:12:19.420 --rc genhtml_legend=1 00:12:19.420 --rc geninfo_all_blocks=1 00:12:19.420 --rc geninfo_unexecuted_blocks=1 00:12:19.420 00:12:19.420 ' 00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.420 --rc genhtml_branch_coverage=1 00:12:19.420 --rc genhtml_function_coverage=1 00:12:19.420 --rc genhtml_legend=1 00:12:19.420 --rc geninfo_all_blocks=1 00:12:19.420 --rc geninfo_unexecuted_blocks=1 00:12:19.420 00:12:19.420 ' 00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.420 --rc genhtml_branch_coverage=1 00:12:19.420 --rc genhtml_function_coverage=1 00:12:19.420 --rc genhtml_legend=1 00:12:19.420 --rc geninfo_all_blocks=1 00:12:19.420 --rc geninfo_unexecuted_blocks=1 00:12:19.420 00:12:19.420 ' 00:12:19.420 13:06:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:19.420 13:06:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:19.420 13:06:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:19.420 13:06:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.420 13:06:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 ************************************ 00:12:19.420 START TEST default_locks 00:12:19.420 ************************************ 00:12:19.420 13:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:12:19.420 13:06:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60323 00:12:19.421 13:06:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:19.421 13:06:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60323 00:12:19.421 13:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60323 ']' 00:12:19.421 13:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.421 13:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.421 13:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.421 13:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.421 13:06:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:19.421 [2024-12-06 13:06:25.939530] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
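The lcov probe a little earlier leans on scripts/common.sh's cmp_versions, which splits each version string on '.', '-' and ':' into an array and compares it component by component, padding the shorter side with zeros. The same idea as a standalone sketch (the real helper additionally validates each component with decimal()):

  # dotted-version "less than" in bash, after the cmp_versions pattern traced above
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                              # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 is older than 2"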
00:12:19.421 [2024-12-06 13:06:25.939689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60323 ] 00:12:19.678 [2024-12-06 13:06:26.114303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.936 [2024-12-06 13:06:26.217058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.503 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.504 13:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:12:20.504 13:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60323 00:12:20.504 13:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60323 00:12:20.504 13:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:21.070 13:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60323 00:12:21.070 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60323 ']' 00:12:21.070 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60323 00:12:21.070 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:12:21.070 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.071 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60323 00:12:21.071 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.071 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.071 killing process with pid 60323 00:12:21.071 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60323' 00:12:21.071 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60323 00:12:21.071 13:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60323 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60323 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60323 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60323 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60323 ']' 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.985 13:06:29 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:22.985 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60323) - No such process 00:12:22.985 ERROR: process (pid: 60323) is no longer running 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:12:22.985 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:22.986 00:12:22.986 real 0m3.658s 00:12:22.986 user 0m3.841s 00:12:22.986 sys 0m0.567s 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.986 ************************************ 00:12:22.986 END TEST default_locks 00:12:22.986 13:06:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:22.986 ************************************ 00:12:23.256 13:06:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:23.256 13:06:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.256 13:06:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.256 13:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:23.256 ************************************ 00:12:23.256 START TEST default_locks_via_rpc 00:12:23.256 ************************************ 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60393 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60393 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60393 ']' 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
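default_locks asserts the core claim from both sides: while pid 60323 is alive, lslocks -p 60323 piped through grep -q spdk_cpu_lock must find the flock, and after killprocess the NOT-wrapped waitforlisten on the dead pid must fail (es=1) with no lock files left for no_locks to count. The two checks, restated as hedged one-liners (the lock-file glob matches the check_remaining_locks pattern used later in this section):

  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"   # while running
  shopt -s nullglob
  locks=(/var/tmp/spdk_cpu_lock_*)
  (( ${#locks[@]} == 0 )) && echo "no stale lock files"                # after teardown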
00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.256 13:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.256 [2024-12-06 13:06:29.711467] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:23.256 [2024-12-06 13:06:29.711634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60393 ] 00:12:23.514 [2024-12-06 13:06:29.894538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.514 [2024-12-06 13:06:29.996661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60393 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60393 00:12:24.451 13:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:24.710 13:06:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60393 00:12:24.710 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60393 ']' 00:12:24.710 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60393 00:12:24.710 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:12:24.710 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.710 13:06:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60393 00:12:24.967 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.967 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.967 killing process with pid 60393 00:12:24.967 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60393' 00:12:24.967 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60393 00:12:24.968 13:06:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60393 00:12:26.871 00:12:26.871 real 0m3.830s 00:12:26.871 user 0m4.057s 00:12:26.871 sys 0m0.637s 00:12:26.871 13:06:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.871 13:06:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.871 ************************************ 00:12:26.871 END TEST default_locks_via_rpc 00:12:26.871 ************************************ 00:12:27.129 13:06:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:27.129 13:06:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:27.129 13:06:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.129 13:06:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:27.129 ************************************ 00:12:27.129 START TEST non_locking_app_on_locked_coremask 00:12:27.129 ************************************ 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60467 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60467 /var/tmp/spdk.sock 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60467 ']' 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.129 13:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:27.129 [2024-12-06 13:06:33.521783] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
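The default_locks_via_rpc run that just completed toggles the same locks at runtime instead of at process exit: framework_disable_cpumask_locks releases the per-core flocks (so no_locks passes) and framework_enable_cpumask_locks re-claims them (so locks_exist passes again). With the rpc.py path from this run and its default /var/tmp/spdk.sock socket, the equivalent manual calls would be:

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC_PY" framework_disable_cpumask_locks      # drop the per-core lock files
  lslocks -p "$pid" | grep -c spdk_cpu_lock      # expect 0 while disabled
  "$RPC_PY" framework_enable_cpumask_locks       # take them back
  lslocks -p "$pid" | grep -c spdk_cpu_lock      # expect >0 again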
00:12:27.129 [2024-12-06 13:06:33.521969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60467 ] 00:12:27.386 [2024-12-06 13:06:33.705214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.386 [2024-12-06 13:06:33.820710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60483 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60483 /var/tmp/spdk2.sock 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60483 ']' 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.320 13:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:28.320 [2024-12-06 13:06:34.727696] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:28.320 [2024-12-06 13:06:34.727891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60483 ] 00:12:28.577 [2024-12-06 13:06:34.934153] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
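non_locking_app_on_locked_coremask pairs a locked target with an unlocked one: the second spdk_tgt above is started on the same -m 0x1 mask but with --disable-cpumask-locks and its own -r /var/tmp/spdk2.sock, so it boots cleanly beside pid 60467 instead of failing the core-0 claim. A condensed sketch of the pairing, using only the binary path and flags from the trace (hugepage and environment setup omitted):

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &                                                 # claims core 0
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no claim
  # both log "Reactor started on core 0"; only the first holds the core-0 flock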
00:12:28.577 [2024-12-06 13:06:34.934246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.835 [2024-12-06 13:06:35.140414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.489 13:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.489 13:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:31.489 13:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60467 00:12:31.489 13:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60467 00:12:31.489 13:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:31.748 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60467 00:12:31.748 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60467 ']' 00:12:31.748 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60467 00:12:31.748 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:31.748 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.748 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60467 00:12:32.006 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.006 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.006 killing process with pid 60467 00:12:32.006 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60467' 00:12:32.006 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60467 00:12:32.006 13:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60467 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60483 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60483 ']' 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60483 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60483 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.191 killing process with pid 60483 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60483' 00:12:36.191 13:06:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60483 00:12:36.191 13:06:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60483 00:12:38.131 00:12:38.131 real 0m11.168s 00:12:38.131 user 0m12.044s 00:12:38.131 sys 0m1.198s 00:12:38.131 13:06:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.131 13:06:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:38.131 ************************************ 00:12:38.131 END TEST non_locking_app_on_locked_coremask 00:12:38.131 ************************************ 00:12:38.131 13:06:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:38.131 13:06:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:38.131 13:06:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.131 13:06:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:38.131 ************************************ 00:12:38.131 START TEST locking_app_on_unlocked_coremask 00:12:38.131 ************************************ 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60631 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60631 /var/tmp/spdk.sock 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60631 ']' 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.131 13:06:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:38.389 [2024-12-06 13:06:44.772767] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:38.389 [2024-12-06 13:06:44.772992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60631 ] 00:12:38.663 [2024-12-06 13:06:45.007819] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:38.663 [2024-12-06 13:06:45.007936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.663 [2024-12-06 13:06:45.129061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60647 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60647 /var/tmp/spdk2.sock 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60647 ']' 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.596 13:06:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:39.596 [2024-12-06 13:06:46.054142] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
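locking_app_on_unlocked_coremask inverts the previous case: the first target (pid 60631) runs with --disable-cpumask-locks, so the second one, started just above with locking enabled on the same core, acquires the lock itself. The follow-up assertion is the usual locks_exist, now against the second pid:

  # the later, lock-enabled instance is the one holding core 0's flock
  lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "pid2 holds the core-0 lock"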
00:12:39.596 [2024-12-06 13:06:46.054568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:12:39.854 [2024-12-06 13:06:46.263571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.111 [2024-12-06 13:06:46.486600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.755 13:06:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.755 13:06:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:42.755 13:06:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60647 00:12:42.755 13:06:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60647 00:12:42.755 13:06:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60631 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60631 ']' 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60631 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60631 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.320 killing process with pid 60631 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60631' 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60631 00:12:43.320 13:06:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60631 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60647 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60647 ']' 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60647 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60647 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.580 killing process with pid 60647 00:12:48.580 13:06:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60647' 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60647 00:12:48.580 13:06:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60647 00:12:49.951 00:12:49.951 real 0m11.589s 00:12:49.951 user 0m12.408s 00:12:49.951 sys 0m1.295s 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.951 ************************************ 00:12:49.951 END TEST locking_app_on_unlocked_coremask 00:12:49.951 ************************************ 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:49.951 13:06:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:49.951 13:06:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:49.951 13:06:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.951 13:06:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:49.951 ************************************ 00:12:49.951 START TEST locking_app_on_locked_coremask 00:12:49.951 ************************************ 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60795 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60795 /var/tmp/spdk.sock 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60795 ']' 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.951 13:06:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:49.951 [2024-12-06 13:06:56.378996] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:49.951 [2024-12-06 13:06:56.379165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60795 ] 00:12:50.210 [2024-12-06 13:06:56.553274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.210 [2024-12-06 13:06:56.658300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.143 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.143 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:51.143 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60811 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60811 /var/tmp/spdk2.sock 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60811 /var/tmp/spdk2.sock 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60811 /var/tmp/spdk2.sock 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60811 ']' 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.144 13:06:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.144 [2024-12-06 13:06:57.641588] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:51.144 [2024-12-06 13:06:57.641801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60811 ] 00:12:51.402 [2024-12-06 13:06:57.853391] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60795 has claimed it. 00:12:51.402 [2024-12-06 13:06:57.853501] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:51.968 ERROR: process (pid: 60811) is no longer running 00:12:51.968 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60811) - No such process 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60795 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60795 00:12:51.968 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60795 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60795 ']' 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60795 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60795 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.534 killing process with pid 60795 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60795' 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60795 00:12:52.534 13:06:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60795 00:12:55.057 00:12:55.057 real 0m4.722s 00:12:55.057 user 0m5.221s 00:12:55.057 sys 0m0.834s 00:12:55.057 13:07:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.057 13:07:00 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:12:55.057 ************************************ 00:12:55.057 END TEST locking_app_on_locked_coremask 00:12:55.057 ************************************ 00:12:55.057 13:07:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:55.057 13:07:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.057 13:07:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.057 13:07:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:55.057 ************************************ 00:12:55.057 START TEST locking_overlapped_coremask 00:12:55.057 ************************************ 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60881 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60881 /var/tmp/spdk.sock 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60881 ']' 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.057 13:07:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:55.057 [2024-12-06 13:07:01.144620] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
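The failed second boot traced just above is the point of locking_app_on_locked_coremask: with pid 60795 holding core 0, the second locked instance logs 'Cannot create lock on core 0, probably process 60795 has claimed it' and exits, and the harness's NOT wrapper turns that expected failure into a pass (es=1 from waitforlisten). The NOT idiom, sketched (the real helper also validates its argument with type -t):

  # run a command that is required to fail; succeed only if it does
  NOT() { if "$@"; then return 1; else return 0; fi; }
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock && echo "second claim correctly refused"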
00:12:55.057 [2024-12-06 13:07:01.144768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60881 ] 00:12:55.057 [2024-12-06 13:07:01.319588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:55.057 [2024-12-06 13:07:01.426868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.057 [2024-12-06 13:07:01.426926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.057 [2024-12-06 13:07:01.426962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60904 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60904 /var/tmp/spdk2.sock 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60904 /var/tmp/spdk2.sock 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60904 /var/tmp/spdk2.sock 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60904 ']' 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.990 13:07:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:55.990 [2024-12-06 13:07:02.349740] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:55.990 [2024-12-06 13:07:02.349988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60904 ] 00:12:56.248 [2024-12-06 13:07:02.569467] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60881 has claimed it. 00:12:56.248 [2024-12-06 13:07:02.569564] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:56.814 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60904) - No such process 00:12:56.814 ERROR: process (pid: 60904) is no longer running 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60881 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60881 ']' 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60881 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60881 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.814 killing process with pid 60881 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60881' 00:12:56.814 13:07:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60881 00:12:56.814 13:07:03 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60881 00:12:59.344 00:12:59.344 real 0m4.230s 00:12:59.344 user 0m11.829s 00:12:59.344 sys 0m0.560s 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.344 ************************************ 00:12:59.344 END TEST locking_overlapped_coremask 00:12:59.344 ************************************ 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:59.344 13:07:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:59.344 13:07:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:59.344 13:07:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.344 13:07:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:59.344 ************************************ 00:12:59.344 START TEST locking_overlapped_coremask_via_rpc 00:12:59.344 ************************************ 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60967 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60967 /var/tmp/spdk.sock 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60967 ']' 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.344 13:07:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.344 [2024-12-06 13:07:05.439857] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:59.344 [2024-12-06 13:07:05.440072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60967 ] 00:12:59.344 [2024-12-06 13:07:05.659591] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
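check_remaining_locks, run at the end of the overlapped test above, pins down the on-disk contract: a target claiming -m 0x7 must leave exactly /var/tmp/spdk_cpu_lock_000 through _002, one flocked file per claimed core, and nothing else. Restated as a standalone check:

  # one lock file per core of mask 0x7, and no strays
  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${expected[*]}" ]] && echo "lock files match core mask 0x7"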
00:12:59.344 [2024-12-06 13:07:05.659695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.344 [2024-12-06 13:07:05.765767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.344 [2024-12-06 13:07:05.765891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.344 [2024-12-06 13:07:05.765906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60986 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60986 /var/tmp/spdk2.sock 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60986 ']' 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.276 13:07:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.276 [2024-12-06 13:07:06.693347] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:00.276 [2024-12-06 13:07:06.693566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60986 ] 00:13:00.535 [2024-12-06 13:07:06.906810] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
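The two targets above are deliberately started with overlapping masks: -m 0x7 is binary 111 (cores 0-2) and -m 0x1c is 11100 (cores 2-4), so both want core 2. A small sketch for decoding such a hex cpumask by hand (the function name is illustrative):

  # Print the core numbers selected by a hex cpumask, e.g. 0x1c -> 2 3 4.
  decode_cpumask() {
      local mask=$(( $1 )) core=0 cores=()
      while (( mask )); do
          (( mask & 1 )) && cores+=("$core")   # bit set -> core selected
          (( core++, mask >>= 1 ))
      done
      echo "${cores[@]}"
  }
  decode_cpumask 0x7    # -> 0 1 2
  decode_cpumask 0x1c   # -> 2 3 4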
00:13:00.535 [2024-12-06 13:07:06.906908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.792 [2024-12-06 13:07:07.124925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.792 [2024-12-06 13:07:07.128941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.792 [2024-12-06 13:07:07.128945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.216 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.474 [2024-12-06 13:07:08.747142] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60967 has claimed it. 00:13:02.474 request: 00:13:02.474 { 00:13:02.474 "method": "framework_enable_cpumask_locks", 00:13:02.474 "req_id": 1 00:13:02.474 } 00:13:02.474 Got JSON-RPC error response 00:13:02.474 response: 00:13:02.474 { 00:13:02.474 "code": -32603, 00:13:02.474 "message": "Failed to claim CPU core: 2" 00:13:02.474 } 00:13:02.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
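Because both targets were launched with --disable-cpumask-locks, the lock files are only taken when requested over RPC. The first target (on the default socket) claims cores 0-2 successfully; asking the second target, on /var/tmp/spdk2.sock, to claim cores 2-4 then fails with the -32603 "Failed to claim CPU core: 2" response shown above. The equivalent manual calls, using the socket paths from this run, would be roughly:

  # Enable lock files on the first target (succeeds, cores 0-2 are free):
  scripts/rpc.py framework_enable_cpumask_locks
  # Ask the second target to claim cores 2-4; core 2 is already held by the
  # first target, so this returns JSON-RPC error -32603.
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks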
00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60967 /var/tmp/spdk.sock 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60967 ']' 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.474 13:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60986 /var/tmp/spdk2.sock 00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60986 ']' 00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
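The waitforlisten helper used throughout these tests (visible above with max_retries=100) just polls the target's RPC socket until it answers or the retries run out. A reduced sketch of that loop, assuming the stock rpc.py client is available:

  # Poll an SPDK target's RPC socket until it is ready, up to 100 tries.
  waitforlisten_sketch() {
      local rpc_addr=${1:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do
          if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
              return 0    # target is up and answering RPCs
          fi
          sleep 0.5
      done
      echo "target never listened on $rpc_addr" >&2
      return 1
  }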
00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.732 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.990 ************************************ 00:13:02.990 END TEST locking_overlapped_coremask_via_rpc 00:13:02.990 ************************************ 00:13:02.990 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.990 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:02.990 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:02.990 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:02.990 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:02.990 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:02.990 00:13:02.990 real 0m4.155s 00:13:02.990 user 0m1.859s 00:13:02.990 sys 0m0.208s 00:13:02.990 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.990 13:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.990 13:07:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:02.990 13:07:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60967 ]] 00:13:02.990 13:07:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60967 00:13:02.990 13:07:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60967 ']' 00:13:02.990 13:07:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60967 00:13:02.990 13:07:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:02.990 13:07:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.991 13:07:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60967 00:13:03.250 killing process with pid 60967 00:13:03.250 13:07:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.250 13:07:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.250 13:07:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60967' 00:13:03.250 13:07:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60967 00:13:03.250 13:07:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60967 00:13:05.779 13:07:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60986 ]] 00:13:05.779 13:07:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60986 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60986 ']' 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60986 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.779 
13:07:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60986 00:13:05.779 killing process with pid 60986 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60986' 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60986 00:13:05.779 13:07:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60986 00:13:07.680 13:07:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:07.680 13:07:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:07.680 13:07:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60967 ]] 00:13:07.680 13:07:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60967 00:13:07.680 13:07:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60967 ']' 00:13:07.680 13:07:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60967 00:13:07.680 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60967) - No such process 00:13:07.680 Process with pid 60967 is not found 00:13:07.680 13:07:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60967 is not found' 00:13:07.680 13:07:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60986 ]] 00:13:07.680 13:07:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60986 00:13:07.680 13:07:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60986 ']' 00:13:07.680 13:07:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60986 00:13:07.680 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60986) - No such process 00:13:07.680 Process with pid 60986 is not found 00:13:07.680 13:07:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60986 is not found' 00:13:07.680 13:07:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:07.680 00:13:07.680 real 0m48.234s 00:13:07.680 user 1m23.809s 00:13:07.680 sys 0m6.273s 00:13:07.680 13:07:13 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.680 13:07:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 ************************************ 00:13:07.680 END TEST cpu_locks 00:13:07.680 ************************************ 00:13:07.680 00:13:07.680 real 1m22.072s 00:13:07.680 user 2m33.376s 00:13:07.680 sys 0m10.167s 00:13:07.680 13:07:13 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.680 ************************************ 00:13:07.680 13:07:13 event -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 END TEST event 00:13:07.680 ************************************ 00:13:07.680 13:07:13 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:07.680 13:07:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.680 13:07:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.680 13:07:13 -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 ************************************ 00:13:07.680 START TEST thread 00:13:07.680 ************************************ 00:13:07.680 13:07:13 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:07.680 * Looking for test storage... 
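The killprocess helper traced above guards the kill with a liveness probe: kill -0 checks that the pid still exists, ps -o comm= fetches the process name (reactor_0, reactor_2, ...) to special-case processes running under sudo, and wait reaps the child. A condensed sketch of the same pattern (the sudo branch is simplified here):

  # Kill a test process only if its pid is still alive; wait reaps it so the
  # exit status is collected (the pid is a child of the test shell).
  killprocess_sketch() {
      local pid=$1 name
      kill -0 "$pid" 2>/dev/null || return 0           # already gone
      name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0
      if [[ $name == sudo ]]; then
          sudo kill "$pid"        # a sudo wrapper needs root to signal
      else
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid" 2>/dev/null || true
  }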
00:13:07.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:07.680 13:07:14 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:07.680 13:07:14 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:07.680 13:07:14 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:13:07.680 13:07:14 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:07.680 13:07:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.680 13:07:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.680 13:07:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.680 13:07:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.680 13:07:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.680 13:07:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.680 13:07:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.680 13:07:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.680 13:07:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.680 13:07:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.680 13:07:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.680 13:07:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:13:07.680 13:07:14 thread -- scripts/common.sh@345 -- # : 1 00:13:07.680 13:07:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.680 13:07:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.680 13:07:14 thread -- scripts/common.sh@365 -- # decimal 1 00:13:07.680 13:07:14 thread -- scripts/common.sh@353 -- # local d=1 00:13:07.680 13:07:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.680 13:07:14 thread -- scripts/common.sh@355 -- # echo 1 00:13:07.680 13:07:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.680 13:07:14 thread -- scripts/common.sh@366 -- # decimal 2 00:13:07.680 13:07:14 thread -- scripts/common.sh@353 -- # local d=2 00:13:07.680 13:07:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.680 13:07:14 thread -- scripts/common.sh@355 -- # echo 2 00:13:07.680 13:07:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.680 13:07:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.680 13:07:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.680 13:07:14 thread -- scripts/common.sh@368 -- # return 0 00:13:07.680 13:07:14 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.681 13:07:14 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:07.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.681 --rc genhtml_branch_coverage=1 00:13:07.681 --rc genhtml_function_coverage=1 00:13:07.681 --rc genhtml_legend=1 00:13:07.681 --rc geninfo_all_blocks=1 00:13:07.681 --rc geninfo_unexecuted_blocks=1 00:13:07.681 00:13:07.681 ' 00:13:07.681 13:07:14 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:07.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.681 --rc genhtml_branch_coverage=1 00:13:07.681 --rc genhtml_function_coverage=1 00:13:07.681 --rc genhtml_legend=1 00:13:07.681 --rc geninfo_all_blocks=1 00:13:07.681 --rc geninfo_unexecuted_blocks=1 00:13:07.681 00:13:07.681 ' 00:13:07.681 13:07:14 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:07.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:13:07.681 --rc genhtml_branch_coverage=1 00:13:07.681 --rc genhtml_function_coverage=1 00:13:07.681 --rc genhtml_legend=1 00:13:07.681 --rc geninfo_all_blocks=1 00:13:07.681 --rc geninfo_unexecuted_blocks=1 00:13:07.681 00:13:07.681 ' 00:13:07.681 13:07:14 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:07.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.681 --rc genhtml_branch_coverage=1 00:13:07.681 --rc genhtml_function_coverage=1 00:13:07.681 --rc genhtml_legend=1 00:13:07.681 --rc geninfo_all_blocks=1 00:13:07.681 --rc geninfo_unexecuted_blocks=1 00:13:07.681 00:13:07.681 ' 00:13:07.681 13:07:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:07.681 13:07:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:07.681 13:07:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.681 13:07:14 thread -- common/autotest_common.sh@10 -- # set +x 00:13:07.681 ************************************ 00:13:07.681 START TEST thread_poller_perf 00:13:07.681 ************************************ 00:13:07.681 13:07:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:07.681 [2024-12-06 13:07:14.178258] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:07.681 [2024-12-06 13:07:14.178443] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61174 ] 00:13:07.938 [2024-12-06 13:07:14.417569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.196 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:13:08.196 [2024-12-06 13:07:14.530899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.582 [2024-12-06T13:07:16.110Z] ====================================== 00:13:09.582 [2024-12-06T13:07:16.110Z] busy:2212905270 (cyc) 00:13:09.582 [2024-12-06T13:07:16.110Z] total_run_count: 292000 00:13:09.582 [2024-12-06T13:07:16.110Z] tsc_hz: 2200000000 (cyc) 00:13:09.582 [2024-12-06T13:07:16.110Z] ====================================== 00:13:09.582 [2024-12-06T13:07:16.110Z] poller_cost: 7578 (cyc), 3444 (nsec) 00:13:09.582 00:13:09.582 real 0m1.635s 00:13:09.582 user 0m1.437s 00:13:09.582 sys 0m0.088s 00:13:09.582 13:07:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.582 ************************************ 00:13:09.582 END TEST thread_poller_perf 00:13:09.582 ************************************ 00:13:09.582 13:07:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:09.582 13:07:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:09.582 13:07:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:09.582 13:07:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.582 13:07:15 thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.582 ************************************ 00:13:09.582 START TEST thread_poller_perf 00:13:09.582 ************************************ 00:13:09.582 13:07:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:09.582 [2024-12-06 13:07:15.850358] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:09.582 [2024-12-06 13:07:15.850504] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61209 ] 00:13:09.582 [2024-12-06 13:07:16.028363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.840 [2024-12-06 13:07:16.166515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.840 Running 1000 pollers for 1 seconds with 0 microseconds period. 
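The poller_cost figure above is just the busy cycle count divided by the number of poller runs, converted to nanoseconds via the TSC rate: 2212905270 cyc / 292000 runs ≈ 7578 cyc, and 7578 / 2.2 GHz ≈ 3444 nsec. The 0-microsecond-period run that follows comes out to 715 cyc / 325 nsec by the same formula, since busy-polling avoids the timed-poller overhead. The arithmetic, for checking a run by hand:

  # Recompute poller_cost from a perf summary (values from the run above).
  busy=2212905270      # busy TSC cycles over the 1 s run
  runs=292000          # total_run_count
  tsc_hz=2200000000    # TSC frequency reported as tsc_hz
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
      cyc = b / r
      printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz
  }'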
00:13:11.291 [2024-12-06T13:07:17.819Z] ====================================== 00:13:11.291 [2024-12-06T13:07:17.819Z] busy:2204815491 (cyc) 00:13:11.291 [2024-12-06T13:07:17.819Z] total_run_count: 3080000 00:13:11.291 [2024-12-06T13:07:17.819Z] tsc_hz: 2200000000 (cyc) 00:13:11.291 [2024-12-06T13:07:17.819Z] ====================================== 00:13:11.291 [2024-12-06T13:07:17.819Z] poller_cost: 715 (cyc), 325 (nsec) 00:13:11.291 00:13:11.291 real 0m1.583s 00:13:11.291 user 0m1.381s 00:13:11.291 sys 0m0.091s 00:13:11.291 13:07:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.291 13:07:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:11.291 ************************************ 00:13:11.291 END TEST thread_poller_perf 00:13:11.291 ************************************ 00:13:11.291 13:07:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:11.291 00:13:11.291 real 0m3.470s 00:13:11.291 user 0m2.952s 00:13:11.291 sys 0m0.296s 00:13:11.291 13:07:17 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.291 13:07:17 thread -- common/autotest_common.sh@10 -- # set +x 00:13:11.291 ************************************ 00:13:11.291 END TEST thread 00:13:11.291 ************************************ 00:13:11.291 13:07:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:13:11.291 13:07:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:11.291 13:07:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:11.291 13:07:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.291 13:07:17 -- common/autotest_common.sh@10 -- # set +x 00:13:11.291 ************************************ 00:13:11.291 START TEST app_cmdline 00:13:11.291 ************************************ 00:13:11.291 13:07:17 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:11.291 * Looking for test storage... 
00:13:11.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:11.291 13:07:17 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.291 13:07:17 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.291 13:07:17 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.291 13:07:17 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.291 13:07:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.291 13:07:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.292 13:07:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.292 --rc genhtml_branch_coverage=1 00:13:11.292 --rc genhtml_function_coverage=1 00:13:11.292 --rc genhtml_legend=1 00:13:11.292 --rc geninfo_all_blocks=1 00:13:11.292 --rc geninfo_unexecuted_blocks=1 00:13:11.292 00:13:11.292 ' 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.292 --rc genhtml_branch_coverage=1 00:13:11.292 --rc genhtml_function_coverage=1 00:13:11.292 --rc genhtml_legend=1 00:13:11.292 --rc geninfo_all_blocks=1 00:13:11.292 --rc geninfo_unexecuted_blocks=1 00:13:11.292 
00:13:11.292 ' 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.292 --rc genhtml_branch_coverage=1 00:13:11.292 --rc genhtml_function_coverage=1 00:13:11.292 --rc genhtml_legend=1 00:13:11.292 --rc geninfo_all_blocks=1 00:13:11.292 --rc geninfo_unexecuted_blocks=1 00:13:11.292 00:13:11.292 ' 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.292 --rc genhtml_branch_coverage=1 00:13:11.292 --rc genhtml_function_coverage=1 00:13:11.292 --rc genhtml_legend=1 00:13:11.292 --rc geninfo_all_blocks=1 00:13:11.292 --rc geninfo_unexecuted_blocks=1 00:13:11.292 00:13:11.292 ' 00:13:11.292 13:07:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:11.292 13:07:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61294 00:13:11.292 13:07:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:11.292 13:07:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61294 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61294 ']' 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.292 13:07:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:11.292 [2024-12-06 13:07:17.773407] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
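cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so every method outside that whitelist is refused with -32601 "Method not found", as the env_dpdk_get_mem_stats call further below demonstrates. Reproducing that behavior by hand would look like:

  # Only the two whitelisted methods answer; anything else is refused.
  scripts/rpc.py spdk_get_version        # allowed, returns the version JSON
  scripts/rpc.py rpc_get_methods         # allowed, lists the two methods
  scripts/rpc.py env_dpdk_get_mem_stats  # rejected: -32601 "Method not found"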
00:13:11.292 [2024-12-06 13:07:17.773590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61294 ] 00:13:11.549 [2024-12-06 13:07:18.000145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.808 [2024-12-06 13:07:18.114226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.372 13:07:18 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.372 13:07:18 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:13:12.372 13:07:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:12.939 { 00:13:12.939 "version": "SPDK v25.01-pre git sha1 cf089b398", 00:13:12.939 "fields": { 00:13:12.939 "major": 25, 00:13:12.939 "minor": 1, 00:13:12.939 "patch": 0, 00:13:12.939 "suffix": "-pre", 00:13:12.939 "commit": "cf089b398" 00:13:12.939 } 00:13:12.939 } 00:13:12.939 13:07:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:12.939 13:07:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:12.939 13:07:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:12.939 13:07:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:12.939 13:07:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:12.939 13:07:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:12.939 13:07:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.939 13:07:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:12.939 13:07:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:12.939 13:07:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.940 13:07:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:12.940 13:07:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:12.940 13:07:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:12.940 13:07:19 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:13.198 request: 00:13:13.198 { 00:13:13.198 "method": "env_dpdk_get_mem_stats", 00:13:13.198 "req_id": 1 00:13:13.198 } 00:13:13.198 Got JSON-RPC error response 00:13:13.198 response: 00:13:13.198 { 00:13:13.198 "code": -32601, 00:13:13.198 "message": "Method not found" 00:13:13.198 } 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:13.198 13:07:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61294 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61294 ']' 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61294 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61294 00:13:13.198 killing process with pid 61294 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61294' 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 61294 00:13:13.198 13:07:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 61294 00:13:15.728 ************************************ 00:13:15.728 END TEST app_cmdline 00:13:15.728 ************************************ 00:13:15.728 00:13:15.728 real 0m4.235s 00:13:15.728 user 0m4.938s 00:13:15.728 sys 0m0.532s 00:13:15.728 13:07:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.728 13:07:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:15.728 13:07:21 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:15.728 13:07:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.728 13:07:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.728 13:07:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.728 ************************************ 00:13:15.728 START TEST version 00:13:15.728 ************************************ 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:15.728 * Looking for test storage... 
00:13:15.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.728 13:07:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.728 13:07:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.728 13:07:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.728 13:07:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.728 13:07:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.728 13:07:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.728 13:07:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.728 13:07:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.728 13:07:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.728 13:07:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.728 13:07:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.728 13:07:21 version -- scripts/common.sh@344 -- # case "$op" in 00:13:15.728 13:07:21 version -- scripts/common.sh@345 -- # : 1 00:13:15.728 13:07:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.728 13:07:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.728 13:07:21 version -- scripts/common.sh@365 -- # decimal 1 00:13:15.728 13:07:21 version -- scripts/common.sh@353 -- # local d=1 00:13:15.728 13:07:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.728 13:07:21 version -- scripts/common.sh@355 -- # echo 1 00:13:15.728 13:07:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.728 13:07:21 version -- scripts/common.sh@366 -- # decimal 2 00:13:15.728 13:07:21 version -- scripts/common.sh@353 -- # local d=2 00:13:15.728 13:07:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.728 13:07:21 version -- scripts/common.sh@355 -- # echo 2 00:13:15.728 13:07:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.728 13:07:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.728 13:07:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.728 13:07:21 version -- scripts/common.sh@368 -- # return 0 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.728 --rc genhtml_branch_coverage=1 00:13:15.728 --rc genhtml_function_coverage=1 00:13:15.728 --rc genhtml_legend=1 00:13:15.728 --rc geninfo_all_blocks=1 00:13:15.728 --rc geninfo_unexecuted_blocks=1 00:13:15.728 00:13:15.728 ' 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.728 --rc genhtml_branch_coverage=1 00:13:15.728 --rc genhtml_function_coverage=1 00:13:15.728 --rc genhtml_legend=1 00:13:15.728 --rc geninfo_all_blocks=1 00:13:15.728 --rc geninfo_unexecuted_blocks=1 00:13:15.728 00:13:15.728 ' 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.728 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:15.728 --rc genhtml_branch_coverage=1 00:13:15.728 --rc genhtml_function_coverage=1 00:13:15.728 --rc genhtml_legend=1 00:13:15.728 --rc geninfo_all_blocks=1 00:13:15.728 --rc geninfo_unexecuted_blocks=1 00:13:15.728 00:13:15.728 ' 00:13:15.728 13:07:21 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.728 --rc genhtml_branch_coverage=1 00:13:15.728 --rc genhtml_function_coverage=1 00:13:15.728 --rc genhtml_legend=1 00:13:15.728 --rc geninfo_all_blocks=1 00:13:15.728 --rc geninfo_unexecuted_blocks=1 00:13:15.728 00:13:15.728 ' 00:13:15.728 13:07:21 version -- app/version.sh@17 -- # get_header_version major 00:13:15.728 13:07:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:15.728 13:07:21 version -- app/version.sh@14 -- # cut -f2 00:13:15.728 13:07:21 version -- app/version.sh@14 -- # tr -d '"' 00:13:15.728 13:07:21 version -- app/version.sh@17 -- # major=25 00:13:15.728 13:07:21 version -- app/version.sh@18 -- # get_header_version minor 00:13:15.728 13:07:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:15.728 13:07:21 version -- app/version.sh@14 -- # tr -d '"' 00:13:15.728 13:07:21 version -- app/version.sh@14 -- # cut -f2 00:13:15.728 13:07:21 version -- app/version.sh@18 -- # minor=1 00:13:15.728 13:07:21 version -- app/version.sh@19 -- # get_header_version patch 00:13:15.728 13:07:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:15.728 13:07:21 version -- app/version.sh@14 -- # cut -f2 00:13:15.728 13:07:21 version -- app/version.sh@14 -- # tr -d '"' 00:13:15.728 13:07:21 version -- app/version.sh@19 -- # patch=0 00:13:15.728 13:07:21 version -- app/version.sh@20 -- # get_header_version suffix 00:13:15.728 13:07:21 version -- app/version.sh@14 -- # tr -d '"' 00:13:15.728 13:07:21 version -- app/version.sh@14 -- # cut -f2 00:13:15.728 13:07:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:15.728 13:07:21 version -- app/version.sh@20 -- # suffix=-pre 00:13:15.728 13:07:21 version -- app/version.sh@22 -- # version=25.1 00:13:15.728 13:07:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:15.728 13:07:21 version -- app/version.sh@28 -- # version=25.1rc0 00:13:15.728 13:07:21 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:15.728 13:07:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:15.728 13:07:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:13:15.728 13:07:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:13:15.728 00:13:15.728 real 0m0.251s 00:13:15.728 user 0m0.162s 00:13:15.728 sys 0m0.125s 00:13:15.728 ************************************ 00:13:15.728 END TEST version 00:13:15.728 ************************************ 00:13:15.728 13:07:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.729 13:07:22 version -- common/autotest_common.sh@10 -- # set +x 00:13:15.729 13:07:22 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:13:15.729 13:07:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:13:15.729 13:07:22 -- spdk/autotest.sh@194 -- # uname -s 00:13:15.729 13:07:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:15.729 13:07:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:15.729 13:07:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:15.729 13:07:22 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:13:15.729 13:07:22 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:13:15.729 13:07:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.729 13:07:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.729 13:07:22 -- common/autotest_common.sh@10 -- # set +x 00:13:15.729 ************************************ 00:13:15.729 START TEST blockdev_nvme 00:13:15.729 ************************************ 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:13:15.729 * Looking for test storage... 00:13:15.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.729 13:07:22 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.729 --rc genhtml_branch_coverage=1 00:13:15.729 --rc genhtml_function_coverage=1 00:13:15.729 --rc genhtml_legend=1 00:13:15.729 --rc geninfo_all_blocks=1 00:13:15.729 --rc geninfo_unexecuted_blocks=1 00:13:15.729 00:13:15.729 ' 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.729 --rc genhtml_branch_coverage=1 00:13:15.729 --rc genhtml_function_coverage=1 00:13:15.729 --rc genhtml_legend=1 00:13:15.729 --rc geninfo_all_blocks=1 00:13:15.729 --rc geninfo_unexecuted_blocks=1 00:13:15.729 00:13:15.729 ' 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.729 --rc genhtml_branch_coverage=1 00:13:15.729 --rc genhtml_function_coverage=1 00:13:15.729 --rc genhtml_legend=1 00:13:15.729 --rc geninfo_all_blocks=1 00:13:15.729 --rc geninfo_unexecuted_blocks=1 00:13:15.729 00:13:15.729 ' 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.729 --rc genhtml_branch_coverage=1 00:13:15.729 --rc genhtml_function_coverage=1 00:13:15.729 --rc genhtml_legend=1 00:13:15.729 --rc geninfo_all_blocks=1 00:13:15.729 --rc geninfo_unexecuted_blocks=1 00:13:15.729 00:13:15.729 ' 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:15.729 13:07:22 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:13:15.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61482 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:15.729 13:07:22 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61482 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61482 ']' 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.729 13:07:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.987 [2024-12-06 13:07:22.386965] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
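setup_nvme_conf (seen below) has gen_nvme.sh emit one bdev_nvme_attach_controller entry per PCIe controller in this VM and feeds the result to load_subsystem_config. The same attachment can be done per controller over RPC; a hand-rolled equivalent for the first controller, using the addresses enumerated in this run, might be:

  # Attach the first QEMU NVMe controller as bdev "Nvme0" (yields Nvme0n1).
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  # List the resulting bdevs, as blockdev.sh later does via bdev_get_bdevs.
  scripts/rpc.py bdev_get_bdevs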
00:13:15.987 [2024-12-06 13:07:22.387326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61482 ] 00:13:16.245 [2024-12-06 13:07:22.558793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.245 [2024-12-06 13:07:22.664110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.177 13:07:23 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.177 13:07:23 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:13:17.177 13:07:23 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:13:17.177 13:07:23 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:13:17.177 13:07:23 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:13:17.177 13:07:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:17.177 13:07:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:17.177 13:07:23 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:13:17.177 13:07:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.177 13:07:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 13:07:23 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.436 13:07:23 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:13:17.436 13:07:23 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:13:17.437 13:07:23 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "34131205-8a3a-494e-a6ec-bd476cfd0b73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "34131205-8a3a-494e-a6ec-bd476cfd0b73",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "57158abd-8a9b-4083-b4b8-c831b3ef1add"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "57158abd-8a9b-4083-b4b8-c831b3ef1add",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8504d4de-d1e2-4d21-aa01-25c2da3331fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8504d4de-d1e2-4d21-aa01-25c2da3331fd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "45eeec12-b12b-412e-8604-ec9d5a9c808f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "45eeec12-b12b-412e-8604-ec9d5a9c808f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6a233d3a-b629-4371-807e-a2e635d6d112"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "6a233d3a-b629-4371-807e-a2e635d6d112",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4659ddce-4031-49d1-bd29-f0725db7fbfc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4659ddce-4031-49d1-bd29-f0725db7fbfc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:17.695 13:07:23 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:13:17.695 13:07:24 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:13:17.695 13:07:24 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:13:17.695 13:07:24 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61482 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61482 ']' 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61482 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:13:17.695 13:07:24 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61482 00:13:17.695 killing process with pid 61482 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61482' 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61482 00:13:17.695 13:07:24 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61482 00:13:19.590 13:07:26 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:19.590 13:07:26 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:19.590 13:07:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:19.590 13:07:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.590 13:07:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.867 ************************************ 00:13:19.867 START TEST bdev_hello_world 00:13:19.867 ************************************ 00:13:19.867 13:07:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:19.867 [2024-12-06 13:07:26.225931] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:19.867 [2024-12-06 13:07:26.226331] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61572 ] 00:13:20.126 [2024-12-06 13:07:26.436601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.126 [2024-12-06 13:07:26.548140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.693 [2024-12-06 13:07:27.170694] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:20.693 [2024-12-06 13:07:27.171016] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:20.693 [2024-12-06 13:07:27.171086] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:20.693 [2024-12-06 13:07:27.174293] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:20.693 [2024-12-06 13:07:27.174871] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:20.693 [2024-12-06 13:07:27.174918] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:20.693 [2024-12-06 13:07:27.175140] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
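The hello_bdev run above is driven entirely by the JSON bdev configuration assembled earlier (the bdev_nvme_attach_controller entries for 0000:00:10.0 through 0000:00:13.0). A sketch of reproducing it by hand under the same spdk_repo layout, assuming gen_nvme.sh's --json-with-subsystems mode to build an equivalent config file:

    # Generate a bdev-subsystem config for the attached NVMe devices, then run
    # the hello_bdev example against the first controller's namespace.
    spdk_dir=/home/vagrant/spdk_repo/spdk
    "$spdk_dir"/scripts/gen_nvme.sh --json-with-subsystems > /tmp/bdev.json
    "$spdk_dir"/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1

On success the app writes "Hello World!" through the bdev layer and reads it back, which is exactly the write_complete/read_complete notice pair recorded in the log.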
00:13:20.693 00:13:20.693 [2024-12-06 13:07:27.175184] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:22.070 00:13:22.070 real 0m2.045s 00:13:22.070 user 0m1.711s 00:13:22.070 sys 0m0.223s 00:13:22.070 13:07:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.070 13:07:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:22.070 ************************************ 00:13:22.070 END TEST bdev_hello_world 00:13:22.070 ************************************ 00:13:22.070 13:07:28 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:13:22.070 13:07:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.070 13:07:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.070 13:07:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.070 ************************************ 00:13:22.070 START TEST bdev_bounds 00:13:22.070 ************************************ 00:13:22.070 Process bdevio pid: 61614 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61614 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61614' 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61614 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61614 ']' 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.070 13:07:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:22.070 [2024-12-06 13:07:28.348597] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
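bdev_bounds launches the bdevio app with -w (start up, then wait for an RPC trigger) and -s 0 (no extra reserved memory), pointed at the same bdev.json, and then fires the registered CUnit suites through its tests.py wrapper. The two steps, condensed from the commands in the trace (the sleep stands in for the harness's waitforlisten on the RPC socket):

    spdk_dir=/home/vagrant/spdk_repo/spdk
    # Start bdevio in wait mode against the generated bdev config...
    "$spdk_dir"/test/bdev/bdevio/bdevio -w -s 0 --json "$spdk_dir"/test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock
    # ...then kick off every registered CUnit suite over RPC.
    "$spdk_dir"/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"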
00:13:22.070 [2024-12-06 13:07:28.348776] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61614 ] 00:13:22.070 [2024-12-06 13:07:28.533240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.329 [2024-12-06 13:07:28.642681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.329 [2024-12-06 13:07:28.642795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.329 [2024-12-06 13:07:28.642801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.267 13:07:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.267 13:07:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:23.267 13:07:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:23.267 I/O targets: 00:13:23.267 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:23.267 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:23.267 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:23.267 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:23.267 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:23.267 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:23.267 00:13:23.267 00:13:23.267 CUnit - A unit testing framework for C - Version 2.1-3 00:13:23.267 http://cunit.sourceforge.net/ 00:13:23.267 00:13:23.267 00:13:23.267 Suite: bdevio tests on: Nvme3n1 00:13:23.267 Test: blockdev write read block ...passed 00:13:23.267 Test: blockdev write zeroes read block ...passed 00:13:23.267 Test: blockdev write zeroes read no split ...passed 00:13:23.267 Test: blockdev write zeroes read split ...passed 00:13:23.267 Test: blockdev write zeroes read split partial ...passed 00:13:23.267 Test: blockdev reset ...[2024-12-06 13:07:29.640543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:13:23.267 passed 00:13:23.267 Test: blockdev write read 8 blocks ...[2024-12-06 13:07:29.644866] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
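Each per-bdev suite begins with a controller reset, so every nvme_ctrlr_disconnect notice should pair with a bdev_nvme_reset_ctrlr_complete "Resetting controller successful." line; the pairs are the reset round-trip, not failures. A quick triage over a saved copy of this output (bdevio.log here is a hypothetical capture, not a file the harness writes):

    # Every disconnect notice should be matched by a completed reset.
    disconnects=$(grep -c 'nvme_ctrlr_disconnect' bdevio.log)
    completes=$(grep -c 'Resetting controller successful' bdevio.log)
    [[ $disconnects -eq $completes ]] && echo "all $completes resets completed"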
00:13:23.267 passed 00:13:23.267 Test: blockdev write read size > 128k ...passed 00:13:23.267 Test: blockdev write read invalid size ...passed 00:13:23.267 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:23.267 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:23.267 Test: blockdev write read max offset ...passed 00:13:23.267 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:23.267 Test: blockdev writev readv 8 blocks ...passed 00:13:23.267 Test: blockdev writev readv 30 x 1block ...passed 00:13:23.267 Test: blockdev writev readv block ...passed 00:13:23.267 Test: blockdev writev readv size > 128k ...passed 00:13:23.267 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:23.267 Test: blockdev comparev and writev ...[2024-12-06 13:07:29.653309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bdc0a000 len:0x1000 00:13:23.267 [2024-12-06 13:07:29.653382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:23.267 passed 00:13:23.267 Test: blockdev nvme passthru rw ...passed 00:13:23.267 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:07:29.654343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:23.267 [2024-12-06 13:07:29.654516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:23.267 passed 00:13:23.267 Test: blockdev nvme admin passthru ...passed 00:13:23.267 Test: blockdev copy ...passed 00:13:23.267 Suite: bdevio tests on: Nvme2n3 00:13:23.267 Test: blockdev write read block ...passed 00:13:23.267 Test: blockdev write zeroes read block ...passed 00:13:23.267 Test: blockdev write zeroes read no split ...passed 00:13:23.267 Test: blockdev write zeroes read split ...passed 00:13:23.267 Test: blockdev write zeroes read split partial ...passed 00:13:23.267 Test: blockdev reset ...[2024-12-06 13:07:29.732077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:23.267 [2024-12-06 13:07:29.736679] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:23.267 passed 00:13:23.267 Test: blockdev write read 8 blocks ...passed 00:13:23.267 Test: blockdev write read size > 128k ...passed 00:13:23.267 Test: blockdev write read invalid size ...passed 00:13:23.267 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:23.267 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:23.267 Test: blockdev write read max offset ...passed 00:13:23.267 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:23.267 Test: blockdev writev readv 8 blocks ...passed 00:13:23.267 Test: blockdev writev readv 30 x 1block ...passed 00:13:23.267 Test: blockdev writev readv block ...passed 00:13:23.267 Test: blockdev writev readv size > 128k ...passed 00:13:23.267 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:23.267 Test: blockdev comparev and writev ...[2024-12-06 13:07:29.746826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a0e06000 len:0x1000 00:13:23.267 [2024-12-06 13:07:29.746903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:23.267 passed 00:13:23.267 Test: blockdev nvme passthru rw ...passed 00:13:23.267 Test: blockdev nvme passthru vendor specific ...passed 00:13:23.267 Test: blockdev nvme admin passthru ...[2024-12-06 13:07:29.747697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:23.267 [2024-12-06 13:07:29.747747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:23.267 passed 00:13:23.267 Test: blockdev copy ...passed 00:13:23.267 Suite: bdevio tests on: Nvme2n2 00:13:23.267 Test: blockdev write read block ...passed 00:13:23.267 Test: blockdev write zeroes read block ...passed 00:13:23.267 Test: blockdev write zeroes read no split ...passed 00:13:23.268 Test: blockdev write zeroes read split ...passed 00:13:23.526 Test: blockdev write zeroes read split partial ...passed 00:13:23.526 Test: blockdev reset ...[2024-12-06 13:07:29.810350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:23.526 [2024-12-06 13:07:29.815361] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
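The COMPARE FAILURE (02/85) completions printed by every "comparev and writev" test are deliberate: the test issues an NVMe Compare against data it knows differs, and a miscompare is the passing outcome. The pair in parentheses is Status Code Type / Status Code in hex, decodable as follows (field meanings per the NVMe specification; the status string is copied from the completions above):

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
    status="02/85"
    sct=${status%/*} sc=${status#*/}
    case "$sct" in
        00) type="generic command status" ;;
        01) type="command specific status" ;;
        02) type="media and data integrity errors" ;;   # SC 85h here = Compare Failure
        *)  type="other/vendor" ;;
    esac
    echo "SCT=0x$sct ($type), SC=0x$sc"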
00:13:23.526 passed 00:13:23.526 Test: blockdev write read 8 blocks ...passed 00:13:23.526 Test: blockdev write read size > 128k ...passed 00:13:23.526 Test: blockdev write read invalid size ...passed 00:13:23.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:23.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:23.526 Test: blockdev write read max offset ...passed 00:13:23.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:23.526 Test: blockdev writev readv 8 blocks ...passed 00:13:23.526 Test: blockdev writev readv 30 x 1block ...passed 00:13:23.526 Test: blockdev writev readv block ...passed 00:13:23.526 Test: blockdev writev readv size > 128k ...passed 00:13:23.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:23.526 Test: blockdev comparev and writev ...[2024-12-06 13:07:29.824445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cdc3c000 len:0x1000 00:13:23.526 [2024-12-06 13:07:29.824641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:23.526 passed 00:13:23.526 Test: blockdev nvme passthru rw ...passed 00:13:23.526 Test: blockdev nvme passthru vendor specific ...passed 00:13:23.526 Test: blockdev nvme admin passthru ...[2024-12-06 13:07:29.825468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:23.526 [2024-12-06 13:07:29.825519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:23.526 passed 00:13:23.526 Test: blockdev copy ...passed 00:13:23.526 Suite: bdevio tests on: Nvme2n1 00:13:23.526 Test: blockdev write read block ...passed 00:13:23.526 Test: blockdev write zeroes read block ...passed 00:13:23.526 Test: blockdev write zeroes read no split ...passed 00:13:23.526 Test: blockdev write zeroes read split ...passed 00:13:23.526 Test: blockdev write zeroes read split partial ...passed 00:13:23.526 Test: blockdev reset ...[2024-12-06 13:07:29.898480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:23.526 passed 00:13:23.526 Test: blockdev write read 8 blocks ...[2024-12-06 13:07:29.902947] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:23.526 passed 00:13:23.526 Test: blockdev write read size > 128k ...passed 00:13:23.526 Test: blockdev write read invalid size ...passed 00:13:23.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:23.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:23.526 Test: blockdev write read max offset ...passed 00:13:23.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:23.526 Test: blockdev writev readv 8 blocks ...passed 00:13:23.526 Test: blockdev writev readv 30 x 1block ...passed 00:13:23.526 Test: blockdev writev readv block ...passed 00:13:23.526 Test: blockdev writev readv size > 128k ...passed 00:13:23.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:23.526 Test: blockdev comparev and writev ...[2024-12-06 13:07:29.910995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cdc38000 len:0x1000 00:13:23.526 [2024-12-06 13:07:29.911075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:23.526 passed 00:13:23.526 Test: blockdev nvme passthru rw ...passed 00:13:23.526 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:07:29.911817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:23.526 [2024-12-06 13:07:29.911868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:23.526 passed 00:13:23.526 Test: blockdev nvme admin passthru ...passed 00:13:23.526 Test: blockdev copy ...passed 00:13:23.526 Suite: bdevio tests on: Nvme1n1 00:13:23.526 Test: blockdev write read block ...passed 00:13:23.526 Test: blockdev write zeroes read block ...passed 00:13:23.526 Test: blockdev write zeroes read no split ...passed 00:13:23.526 Test: blockdev write zeroes read split ...passed 00:13:23.526 Test: blockdev write zeroes read split partial ...passed 00:13:23.526 Test: blockdev reset ...[2024-12-06 13:07:29.993237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:23.526 [2024-12-06 13:07:29.996900] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:23.526 passed 00:13:23.526 Test: blockdev write read 8 blocks ...passed 00:13:23.526 Test: blockdev write read size > 128k ...passed 00:13:23.526 Test: blockdev write read invalid size ...passed 00:13:23.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:23.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:23.526 Test: blockdev write read max offset ...passed 00:13:23.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:23.526 Test: blockdev writev readv 8 blocks ...passed 00:13:23.526 Test: blockdev writev readv 30 x 1block ...passed 00:13:23.526 Test: blockdev writev readv block ...passed 00:13:23.526 Test: blockdev writev readv size > 128k ...passed 00:13:23.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:23.526 Test: blockdev comparev and writev ...[2024-12-06 13:07:30.007618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cdc34000 len:0x1000 00:13:23.526 [2024-12-06 13:07:30.007684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:23.526 passed 00:13:23.526 Test: blockdev nvme passthru rw ...passed 00:13:23.526 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:07:30.008464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:23.526 [2024-12-06 13:07:30.008509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:23.526 passed 00:13:23.526 Test: blockdev nvme admin passthru ...passed 00:13:23.526 Test: blockdev copy ...passed 00:13:23.526 Suite: bdevio tests on: Nvme0n1 00:13:23.526 Test: blockdev write read block ...passed 00:13:23.526 Test: blockdev write zeroes read block ...passed 00:13:23.526 Test: blockdev write zeroes read no split ...passed 00:13:23.785 Test: blockdev write zeroes read split ...passed 00:13:23.785 Test: blockdev write zeroes read split partial ...passed 00:13:23.785 Test: blockdev reset ...[2024-12-06 13:07:30.091681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:23.785 [2024-12-06 13:07:30.095554] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:13:23.785 passed 00:13:23.785 Test: blockdev write read 8 blocks ...passed 00:13:23.785 Test: blockdev write read size > 128k ...passed 00:13:23.785 Test: blockdev write read invalid size ...passed 00:13:23.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:23.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:23.785 Test: blockdev write read max offset ...passed 00:13:23.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:23.785 Test: blockdev writev readv 8 blocks ...passed 00:13:23.785 Test: blockdev writev readv 30 x 1block ...passed 00:13:23.785 Test: blockdev writev readv block ...passed 00:13:23.785 Test: blockdev writev readv size > 128k ...passed 00:13:23.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:23.785 Test: blockdev comparev and writev ...passed 00:13:23.785 Test: blockdev nvme passthru rw ...[2024-12-06 13:07:30.103185] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:13:23.785 separate metadata which is not supported yet. 00:13:23.785 passed 00:13:23.785 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:07:30.103677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:23.785 [2024-12-06 13:07:30.103738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:23.785 passed 00:13:23.785 Test: blockdev nvme admin passthru ...passed 00:13:23.785 Test: blockdev copy ...passed 00:13:23.785 00:13:23.785 Run Summary: Type Total Ran Passed Failed Inactive 00:13:23.785 suites 6 6 n/a 0 0 00:13:23.785 tests 138 138 138 0 0 00:13:23.785 asserts 893 893 893 0 n/a 00:13:23.785 00:13:23.785 Elapsed time = 1.447 seconds 00:13:23.785 0 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61614 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61614 ']' 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61614 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61614 00:13:23.785 killing process with pid 61614 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61614' 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61614 00:13:23.785 13:07:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61614 00:13:24.721 ************************************ 00:13:24.721 END TEST bdev_bounds 00:13:24.721 ************************************ 00:13:24.721 13:07:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:24.721 00:13:24.721 real 0m2.925s 00:13:24.721 user 0m7.731s 00:13:24.721 sys 0m0.383s 00:13:24.721 13:07:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.721 13:07:31 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:24.721 13:07:31 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:24.721 13:07:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:24.721 13:07:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.721 13:07:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.721 ************************************ 00:13:24.721 START TEST bdev_nbd 00:13:24.721 ************************************ 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61679 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61679 /var/tmp/spdk-nbd.sock 00:13:24.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
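nbd_function_test runs a bare bdev_svc app on /var/tmp/spdk-nbd.sock and exports each bdev as a kernel /dev/nbdX node. Condensed from the RPCs that follow, one export/verify/teardown cycle looks roughly like this (paths taken from the trace; error handling omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # nbd_start_disk prints the /dev/nbdX node it allocated when no explicit
    # device argument is given.
    dev=$("$rpc" -s "$sock" nbd_start_disk Nvme0n1)
    # One O_DIRECT block read proves the kernel device is wired up end to end.
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    "$rpc" -s "$sock" nbd_stop_disk "$dev"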
00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61679 ']' 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.721 13:07:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:24.980 [2024-12-06 13:07:31.280405] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:24.980 [2024-12-06 13:07:31.280608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.980 [2024-12-06 13:07:31.457099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.238 [2024-12-06 13:07:31.564338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:25.805 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:26.064 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.322 1+0 records in 00:13:26.322 1+0 records out 00:13:26.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455906 s, 9.0 MB/s 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:26.322 13:07:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.581 1+0 records in 00:13:26.581 1+0 records out 00:13:26.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601449 s, 6.8 MB/s 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
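The waitfornbd helper visible in the xtrace polls /proc/partitions until the new node appears, then performs the single 4096-byte direct read and size check recorded as the "1+0 records" dd output. The loop reduces to roughly the following (the 20-iteration budget matches the (( i <= 20 )) guards in the trace; the sleep interval and /tmp scratch path are illustrative):

    waitfornbd() {
        local nbd_name=$1 i
        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Read one block with O_DIRECT and confirm a full 4096 bytes arrived.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s /tmp/nbdtest) == 4096 ]]
    }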
00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:26.581 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.839 1+0 records in 00:13:26.839 1+0 records out 00:13:26.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612182 s, 6.7 MB/s 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:26.839 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.405 1+0 records in 00:13:27.405 1+0 records out 00:13:27.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504348 s, 8.1 MB/s 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:27.405 13:07:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.664 1+0 records in 00:13:27.664 1+0 records out 00:13:27.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695579 s, 5.9 MB/s 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:27.664 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.923 1+0 records in 00:13:27.923 1+0 records out 00:13:27.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000891607 s, 4.6 MB/s 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:27.923 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd0", 00:13:28.490 "bdev_name": "Nvme0n1" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd1", 00:13:28.490 "bdev_name": "Nvme1n1" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd2", 00:13:28.490 "bdev_name": "Nvme2n1" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd3", 00:13:28.490 "bdev_name": "Nvme2n2" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd4", 00:13:28.490 "bdev_name": "Nvme2n3" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd5", 00:13:28.490 "bdev_name": "Nvme3n1" 00:13:28.490 } 00:13:28.490 ]' 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 
00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd0", 00:13:28.490 "bdev_name": "Nvme0n1" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd1", 00:13:28.490 "bdev_name": "Nvme1n1" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd2", 00:13:28.490 "bdev_name": "Nvme2n1" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd3", 00:13:28.490 "bdev_name": "Nvme2n2" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd4", 00:13:28.490 "bdev_name": "Nvme2n3" 00:13:28.490 }, 00:13:28.490 { 00:13:28.490 "nbd_device": "/dev/nbd5", 00:13:28.490 "bdev_name": "Nvme3n1" 00:13:28.490 } 00:13:28.490 ]' 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.490 13:07:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.748 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.005 13:07:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.262 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.519 13:07:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.086 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.345 13:07:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:30.624 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:30.882 /dev/nbd0 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.139 1+0 records in 00:13:31.139 1+0 records out 00:13:31.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729438 s, 5.6 MB/s 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:31.139 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:13:31.397 /dev/nbd1 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:31.397 13:07:37 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.397 1+0 records in 00:13:31.397 1+0 records out 00:13:31.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536977 s, 7.6 MB/s 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:31.397 13:07:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:13:31.655 /dev/nbd10 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.655 1+0 records in 00:13:31.655 1+0 records out 00:13:31.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671068 s, 6.1 MB/s 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:31.655 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:13:32.220 /dev/nbd11 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
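The two helpers traced repeatedly above come from common/autotest_common.sh and bdev/nbd_common.sh: waitfornbd polls /proc/partitions until the new node appears and then proves it actually serves I/O with a single O_DIRECT read, while waitfornbd_exit polls until the name disappears again after nbd_stop_disk. A condensed sketch reconstructed from the trace (the real helper also retries the dd probe itself up to 20 times, and the sleep interval here is an assumption):

    waitfornbd() {
        local nbd_name=$1 i
        local out=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the device serves I/O: read one 4 KiB block through O_DIRECT.
        dd if=/dev/$nbd_name of="$out" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$out")
        rm -f "$out"
        [ "$size" != 0 ]    # the '4096 != 0' comparison seen in every probe above
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }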
00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.220 1+0 records in 00:13:32.220 1+0 records out 00:13:32.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068311 s, 6.0 MB/s 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:32.220 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:13:32.478 /dev/nbd12 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.478 1+0 records in 00:13:32.478 1+0 records out 00:13:32.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737658 s, 5.6 MB/s 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
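Assembled, the attach phase (nbd_common.sh@14-17 above) is a paired walk over bdev names and nbd nodes, each start followed by the readiness probe; a minimal sketch with the socket and paths exactly as invoked in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done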
00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:32.478 13:07:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:13:32.736 /dev/nbd13 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.736 1+0 records in 00:13:32.736 1+0 records out 00:13:32.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064539 s, 6.3 MB/s 00:13:32.736 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.737 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:32.995 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd0", 00:13:32.995 "bdev_name": "Nvme0n1" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd1", 00:13:32.995 "bdev_name": "Nvme1n1" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": 
"/dev/nbd10", 00:13:32.995 "bdev_name": "Nvme2n1" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd11", 00:13:32.995 "bdev_name": "Nvme2n2" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd12", 00:13:32.995 "bdev_name": "Nvme2n3" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd13", 00:13:32.995 "bdev_name": "Nvme3n1" 00:13:32.995 } 00:13:32.995 ]' 00:13:32.995 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:32.995 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd0", 00:13:32.995 "bdev_name": "Nvme0n1" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd1", 00:13:32.995 "bdev_name": "Nvme1n1" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd10", 00:13:32.995 "bdev_name": "Nvme2n1" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd11", 00:13:32.995 "bdev_name": "Nvme2n2" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd12", 00:13:32.995 "bdev_name": "Nvme2n3" 00:13:32.995 }, 00:13:32.995 { 00:13:32.995 "nbd_device": "/dev/nbd13", 00:13:32.995 "bdev_name": "Nvme3n1" 00:13:32.995 } 00:13:32.995 ]' 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:33.254 /dev/nbd1 00:13:33.254 /dev/nbd10 00:13:33.254 /dev/nbd11 00:13:33.254 /dev/nbd12 00:13:33.254 /dev/nbd13' 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:33.254 /dev/nbd1 00:13:33.254 /dev/nbd10 00:13:33.254 /dev/nbd11 00:13:33.254 /dev/nbd12 00:13:33.254 /dev/nbd13' 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:33.254 256+0 records in 00:13:33.254 256+0 records out 00:13:33.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102589 s, 102 MB/s 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:33.254 256+0 records in 00:13:33.254 256+0 records out 00:13:33.254 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.125903 s, 8.3 MB/s 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.254 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:33.513 256+0 records in 00:13:33.513 256+0 records out 00:13:33.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128874 s, 8.1 MB/s 00:13:33.513 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.513 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:33.513 256+0 records in 00:13:33.513 256+0 records out 00:13:33.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134063 s, 7.8 MB/s 00:13:33.513 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.513 13:07:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:33.771 256+0 records in 00:13:33.771 256+0 records out 00:13:33.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135442 s, 7.7 MB/s 00:13:33.771 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.771 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:33.771 256+0 records in 00:13:33.771 256+0 records out 00:13:33.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128268 s, 8.2 MB/s 00:13:33.771 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.771 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:34.030 256+0 records in 00:13:34.030 256+0 records out 00:13:34.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139027 s, 7.5 MB/s 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
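The write/verify pass running here (nbd_dd_data_verify, nbd_common.sh@70-85) fans one random 1 MiB file out to every attached device with O_DIRECT writes, then compares each device against that file with cmp; the ~8 MB/s figures are therefore per-device 4 KiB direct-I/O dd throughput, not the drive's raw rate. The same pass, condensed (nbd_list as in the attach sketch above):

    randfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$randfile" bs=4096 count=256             # 1 MiB of noise
    for dev in "${nbd_list[@]}"; do
        dd if="$randfile" of="$dev" bs=4096 count=256 oflag=direct  # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$randfile" "$dev"                             # verify phase
    done
    rm "$randfile"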
00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.030 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.598 13:07:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.598 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.176 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.435 13:07:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.693 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.951 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.210 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:36.210 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:36.210 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:36.210 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:36.210 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:36.469 13:07:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:36.727 malloc_lvol_verify 00:13:36.727 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:36.985 c3ea8006-ce4d-4b25-8b07-63d810fe91f4 00:13:36.985 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:37.244 f3feacdb-154c-4709-8eda-cefc819753b8 00:13:37.244 13:07:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:37.502 /dev/nbd0 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- 
# [[ -e /sys/block/nbd0/size ]] 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:37.761 mke2fs 1.47.0 (5-Feb-2023) 00:13:37.761 Discarding device blocks: 0/4096 done 00:13:37.761 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:37.761 00:13:37.761 Allocating group tables: 0/1 done 00:13:37.761 Writing inode tables: 0/1 done 00:13:37.761 Creating journal (1024 blocks): done 00:13:37.761 Writing superblocks and filesystem accounting information: 0/1 done 00:13:37.761 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.761 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61679 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61679 ']' 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61679 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61679 00:13:38.020 killing process with pid 61679 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61679' 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61679 00:13:38.020 13:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61679 00:13:39.396 ************************************ 00:13:39.396 END TEST bdev_nbd 00:13:39.396 ************************************ 00:13:39.396 13:07:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:39.396 
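The closing nbd_with_lvol_verify step stacks a logical volume on a malloc bdev, exports it over NBD, and proves the export is usable by building a real ext4 filesystem on it. The same round trip by hand, with the sizes from the trace (a 16 MiB malloc bdev with 512-byte blocks and a 4 MiB lvol; the lvstore and lvol UUIDs in the log are runtime values):

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # 4 MiB lvol -> 8192 512-byte sectors, the capacity value checked above
    [[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) != 0 ))
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0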
00:13:39.396 real 0m14.342s 00:13:39.396 user 0m21.229s 00:13:39.396 sys 0m4.336s 00:13:39.396 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.396 13:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:39.396 13:07:45 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:13:39.396 13:07:45 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:13:39.396 skipping fio tests on NVMe due to multi-ns failures. 00:13:39.396 13:07:45 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:13:39.396 13:07:45 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:39.396 13:07:45 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.396 13:07:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:39.396 13:07:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.397 13:07:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.397 ************************************ 00:13:39.397 START TEST bdev_verify 00:13:39.397 ************************************ 00:13:39.397 13:07:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.397 [2024-12-06 13:07:45.689415] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:39.397 [2024-12-06 13:07:45.689595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62102 ] 00:13:39.397 [2024-12-06 13:07:45.876975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.654 [2024-12-06 13:07:46.004925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.654 [2024-12-06 13:07:46.004925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.219 Running I/O for 5 seconds... 
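bdev_verify now drives the same six namespaces through the bdevperf example app instead of NBD: 128 outstanding I/Os per job (-q), 4 KiB transfers (-o), a verifying read/write workload (-w verify) for 5 seconds (-t), on cores 0 and 1 (-m 0x3); judging by the paired Core Mask 0x1/0x2 job lines in the table that follows, -C lets every core in that mask drive every bdev. Runnable by hand exactly as invoked above:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3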
00:13:42.527 19968.00 IOPS, 78.00 MiB/s [2024-12-06T13:07:49.988Z] 20192.00 IOPS, 78.88 MiB/s [2024-12-06T13:07:50.954Z] 19925.33 IOPS, 77.83 MiB/s [2024-12-06T13:07:51.887Z] 19760.00 IOPS, 77.19 MiB/s [2024-12-06T13:07:51.887Z] 19276.80 IOPS, 75.30 MiB/s 00:13:45.359 Latency(us) 00:13:45.359 [2024-12-06T13:07:51.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.359 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x0 length 0xbd0bd 00:13:45.359 Nvme0n1 : 5.07 1615.56 6.31 0.00 0.00 79054.23 17515.99 77213.32 00:13:45.359 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:45.359 Nvme0n1 : 5.08 1563.07 6.11 0.00 0.00 81713.92 15252.01 80549.70 00:13:45.359 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x0 length 0xa0000 00:13:45.359 Nvme1n1 : 5.08 1614.18 6.31 0.00 0.00 78960.23 20256.58 76736.70 00:13:45.359 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0xa0000 length 0xa0000 00:13:45.359 Nvme1n1 : 5.08 1561.19 6.10 0.00 0.00 81620.57 20018.27 80073.08 00:13:45.359 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x0 length 0x80000 00:13:45.359 Nvme2n1 : 5.08 1613.48 6.30 0.00 0.00 78831.91 21209.83 75306.82 00:13:45.359 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x80000 length 0x80000 00:13:45.359 Nvme2n1 : 5.09 1560.16 6.09 0.00 0.00 81500.32 21209.83 79119.83 00:13:45.359 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x0 length 0x80000 00:13:45.359 Nvme2n2 : 5.08 1612.28 6.30 0.00 0.00 78747.20 21805.61 72923.69 00:13:45.359 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x80000 length 0x80000 00:13:45.359 Nvme2n2 : 5.09 1559.16 6.09 0.00 0.00 81356.80 19422.49 76260.07 00:13:45.359 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x0 length 0x80000 00:13:45.359 Nvme2n3 : 5.08 1611.67 6.30 0.00 0.00 78642.28 18588.39 72923.69 00:13:45.359 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x80000 length 0x80000 00:13:45.359 Nvme2n3 : 5.09 1558.60 6.09 0.00 0.00 81183.09 17992.61 77213.32 00:13:45.359 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x0 length 0x20000 00:13:45.359 Nvme3n1 : 5.09 1610.59 6.29 0.00 0.00 78545.53 12153.95 76736.70 00:13:45.359 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.359 Verification LBA range: start 0x20000 length 0x20000 00:13:45.359 Nvme3n1 : 5.09 1558.02 6.09 0.00 0.00 81039.92 13226.36 80073.08 00:13:45.359 [2024-12-06T13:07:51.887Z] =================================================================================================================== 00:13:45.359 [2024-12-06T13:07:51.887Z] Total : 19037.96 74.37 0.00 0.00 80078.99 12153.95 80549.70 00:13:46.735 00:13:46.735 real 0m7.615s 00:13:46.735 user 0m14.072s 00:13:46.735 sys 0m0.263s 00:13:46.735 13:07:53 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.735 ************************************ 00:13:46.735 13:07:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:46.735 END TEST bdev_verify 00:13:46.735 ************************************ 00:13:46.735 13:07:53 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:46.735 13:07:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:46.735 13:07:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.735 13:07:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.735 ************************************ 00:13:46.735 START TEST bdev_verify_big_io 00:13:46.735 ************************************ 00:13:46.735 13:07:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:46.994 [2024-12-06 13:07:53.346646] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:46.994 [2024-12-06 13:07:53.346798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62206 ] 00:13:47.252 [2024-12-06 13:07:53.521909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:47.252 [2024-12-06 13:07:53.629797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.252 [2024-12-06 13:07:53.629801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.189 Running I/O for 5 seconds... 
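A quick consistency check before the big-I/O results land below: bdev_verify_big_io is the same harness with 65536-byte transfers (-o 65536), so its MiB/s column is just IOPS scaled by 64 KiB. For the first 1-second sample reported just below:

    awk 'BEGIN { printf "%.2f MiB/s\n", 529 * 65536 / 1048576 }'    # -> 33.06 MiB/s

The 4 KiB verify table above obeys the same identity, e.g. its Total row: 19037.96 x 4096 / 2^20 = 74.37 MiB/s.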
00:13:53.287 529.00 IOPS, 33.06 MiB/s [2024-12-06T13:08:00.380Z] 2340.00 IOPS, 146.25 MiB/s [2024-12-06T13:08:00.638Z] 2779.00 IOPS, 173.69 MiB/s 00:13:54.110 Latency(us) 00:13:54.110 [2024-12-06T13:08:00.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.110 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x0 length 0xbd0b 00:13:54.111 Nvme0n1 : 5.57 124.87 7.80 0.00 0.00 988328.10 14239.19 1029510.98 00:13:54.111 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:54.111 Nvme0n1 : 5.70 117.97 7.37 0.00 0.00 1037732.59 35746.91 1006632.96 00:13:54.111 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x0 length 0xa000 00:13:54.111 Nvme1n1 : 5.80 119.73 7.48 0.00 0.00 988238.93 76260.07 1601461.53 00:13:54.111 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0xa000 length 0xa000 00:13:54.111 Nvme1n1 : 5.80 121.43 7.59 0.00 0.00 988292.02 95325.09 903681.86 00:13:54.111 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x0 length 0x8000 00:13:54.111 Nvme2n1 : 5.80 123.26 7.70 0.00 0.00 939846.40 94848.47 1624339.55 00:13:54.111 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x8000 length 0x8000 00:13:54.111 Nvme2n1 : 5.89 126.75 7.92 0.00 0.00 926690.22 42657.98 926559.88 00:13:54.111 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x0 length 0x8000 00:13:54.111 Nvme2n2 : 5.86 127.48 7.97 0.00 0.00 883177.24 55050.24 1647217.57 00:13:54.111 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x8000 length 0x8000 00:13:54.111 Nvme2n2 : 5.89 126.63 7.91 0.00 0.00 899096.10 42657.98 953250.91 00:13:54.111 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x0 length 0x8000 00:13:54.111 Nvme2n3 : 5.92 137.13 8.57 0.00 0.00 799941.57 28359.21 1677721.60 00:13:54.111 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x8000 length 0x8000 00:13:54.111 Nvme2n3 : 5.90 129.19 8.07 0.00 0.00 859910.19 46232.67 1174405.12 00:13:54.111 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x0 length 0x2000 00:13:54.111 Nvme3n1 : 5.96 153.87 9.62 0.00 0.00 694151.61 2398.02 1700599.62 00:13:54.111 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.111 Verification LBA range: start 0x2000 length 0x2000 00:13:54.111 Nvme3n1 : 5.90 140.92 8.81 0.00 0.00 767368.34 2055.45 991380.95 00:13:54.111 [2024-12-06T13:08:00.639Z] =================================================================================================================== 00:13:54.111 [2024-12-06T13:08:00.639Z] Total : 1549.22 96.83 0.00 0.00 889116.51 2055.45 1700599.62 00:13:56.070 00:13:56.070 real 0m8.884s 00:13:56.070 user 0m16.617s 00:13:56.070 sys 0m0.273s 00:13:56.070 13:08:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.070 13:08:02 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.070 ************************************ 00:13:56.070 END TEST bdev_verify_big_io 00:13:56.070 ************************************ 00:13:56.070 13:08:02 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:56.070 13:08:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:56.070 13:08:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.070 13:08:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.070 ************************************ 00:13:56.070 START TEST bdev_write_zeroes 00:13:56.070 ************************************ 00:13:56.070 13:08:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:56.070 [2024-12-06 13:08:02.302348] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:56.070 [2024-12-06 13:08:02.303366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62321 ] 00:13:56.070 [2024-12-06 13:08:02.487995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.328 [2024-12-06 13:08:02.598905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.893 Running I/O for 1 seconds... 
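bdev_write_zeroes swaps the workload for a 1-second zero-fill pass (-w write_zeroes -t 1 in the invocation above), exercising the bdev layer's write_zeroes path rather than data writes; the throughput identity still holds, e.g. 42551 IOPS x 4 KiB / 2^20 = 166.21 MiB/s in the first sample reported just below. The launch, by hand:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1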
00:13:57.827 42551.00 IOPS, 166.21 MiB/s 00:13:57.827 Latency(us) 00:13:57.827 [2024-12-06T13:08:04.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.827 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.827 Nvme0n1 : 1.03 6709.62 26.21 0.00 0.00 19029.99 6166.34 66727.56 00:13:57.827 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.827 Nvme1n1 : 1.03 7054.10 27.56 0.00 0.00 18071.88 11915.64 45517.73 00:13:57.827 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.827 Nvme2n1 : 1.04 7095.18 27.72 0.00 0.00 17892.73 11736.90 54335.30 00:13:57.827 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.827 Nvme2n2 : 1.04 7094.32 27.71 0.00 0.00 17791.70 10902.81 55288.55 00:13:57.827 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.827 Nvme2n3 : 1.04 7083.52 27.67 0.00 0.00 17766.36 9889.98 55526.87 00:13:57.827 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.827 Nvme3n1 : 1.04 7119.11 27.81 0.00 0.00 17637.63 9889.98 55765.18 00:13:57.827 [2024-12-06T13:08:04.355Z] =================================================================================================================== 00:13:57.827 [2024-12-06T13:08:04.355Z] Total : 42155.86 164.67 0.00 0.00 18021.41 6166.34 66727.56 00:13:59.201 00:13:59.201 real 0m3.271s 00:13:59.201 user 0m2.892s 00:13:59.201 sys 0m0.252s 00:13:59.201 13:08:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.201 13:08:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:59.201 ************************************ 00:13:59.201 END TEST bdev_write_zeroes 00:13:59.201 ************************************ 00:13:59.201 13:08:05 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:59.201 13:08:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:59.201 13:08:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.201 13:08:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.201 ************************************ 00:13:59.201 START TEST bdev_json_nonenclosed 00:13:59.201 ************************************ 00:13:59.201 13:08:05 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:59.201 [2024-12-06 13:08:05.619314] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
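Every START TEST/END TEST banner pair and real/user/sys triplet in this log, including the one just above, comes from the run_test wrapper in common/autotest_common.sh; a simplified sketch of its shape (banner width and the xtrace plumbing are condensed here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"           # run the test body; timing prints as real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }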
00:13:59.201 [2024-12-06 13:08:05.619486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62374 ] 00:13:59.460 [2024-12-06 13:08:05.805030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.460 [2024-12-06 13:08:05.931312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.460 [2024-12-06 13:08:05.931459] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:59.460 [2024-12-06 13:08:05.931491] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:59.460 [2024-12-06 13:08:05.931508] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:59.718 00:13:59.718 real 0m0.697s 00:13:59.718 user 0m0.465s 00:13:59.718 sys 0m0.126s 00:13:59.718 13:08:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.718 13:08:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:59.718 ************************************ 00:13:59.718 END TEST bdev_json_nonenclosed 00:13:59.718 ************************************ 00:13:59.976 13:08:06 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:59.976 13:08:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:59.976 13:08:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.976 13:08:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.976 ************************************ 00:13:59.976 START TEST bdev_json_nonarray 00:13:59.976 ************************************ 00:13:59.976 13:08:06 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:59.976 [2024-12-06 13:08:06.378555] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:59.976 [2024-12-06 13:08:06.378751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62405 ] 00:14:00.237 [2024-12-06 13:08:06.573493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.237 [2024-12-06 13:08:06.748117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.237 [2024-12-06 13:08:06.748260] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:14:00.237 [2024-12-06 13:08:06.748294] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:00.237 [2024-12-06 13:08:06.748311] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:00.805 00:14:00.805 real 0m0.767s 00:14:00.805 user 0m0.513s 00:14:00.805 sys 0m0.147s 00:14:00.805 13:08:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.805 13:08:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:00.805 ************************************ 00:14:00.805 END TEST bdev_json_nonarray 00:14:00.805 ************************************ 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:14:00.805 13:08:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:14:00.805 00:14:00.805 real 0m45.008s 00:14:00.805 user 1m9.608s 00:14:00.805 sys 0m6.850s 00:14:00.805 13:08:07 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.805 13:08:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.805 ************************************ 00:14:00.805 END TEST blockdev_nvme 00:14:00.805 ************************************ 00:14:00.805 13:08:07 -- spdk/autotest.sh@209 -- # uname -s 00:14:00.805 13:08:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:14:00.805 13:08:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:14:00.805 13:08:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.805 13:08:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.805 13:08:07 -- common/autotest_common.sh@10 -- # set +x 00:14:00.805 ************************************ 00:14:00.805 START TEST blockdev_nvme_gpt 00:14:00.805 ************************************ 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:14:00.805 * Looking for test storage... 
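Before the suite moves on to its gpt variant, a note on the two JSON-validation tests that just completed: both are negative tests, and they pass precisely because bdevperf rejects the malformed config cleanly (an ERROR from json_config_prepare_ctx followed by an orderly spdk_app_stop) instead of crashing. The fixture contents are not echoed in the log, but the two error messages imply that nonenclosed.json omits the outer {} and nonarray.json supplies "subsystems" as something other than an array. For contrast, a well-formed config has the shape the gpt suite loads further down via load_subsystem_config:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }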
00:14:00.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.805 13:08:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:00.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.805 --rc genhtml_branch_coverage=1 00:14:00.805 --rc genhtml_function_coverage=1 00:14:00.805 --rc genhtml_legend=1 00:14:00.805 --rc geninfo_all_blocks=1 00:14:00.805 --rc geninfo_unexecuted_blocks=1 00:14:00.805 00:14:00.805 ' 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:00.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.805 --rc 
genhtml_branch_coverage=1 00:14:00.805 --rc genhtml_function_coverage=1 00:14:00.805 --rc genhtml_legend=1 00:14:00.805 --rc geninfo_all_blocks=1 00:14:00.805 --rc geninfo_unexecuted_blocks=1 00:14:00.805 00:14:00.805 ' 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:00.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.805 --rc genhtml_branch_coverage=1 00:14:00.805 --rc genhtml_function_coverage=1 00:14:00.805 --rc genhtml_legend=1 00:14:00.805 --rc geninfo_all_blocks=1 00:14:00.805 --rc geninfo_unexecuted_blocks=1 00:14:00.805 00:14:00.805 ' 00:14:00.805 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:00.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.805 --rc genhtml_branch_coverage=1 00:14:00.805 --rc genhtml_function_coverage=1 00:14:00.805 --rc genhtml_legend=1 00:14:00.805 --rc geninfo_all_blocks=1 00:14:00.805 --rc geninfo_unexecuted_blocks=1 00:14:00.805 00:14:00.805 ' 00:14:00.805 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:00.805 13:08:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:14:00.805 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:00.805 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62489 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62489 
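The scripts/common.sh trace near the top of this stretch (lcov --version, then lt 1.15 2 via cmp_versions) is the harness probing the installed lcov: version strings are split on '.', '-' and ':' and compared field by field, and because 1.15 sorts before 2 the pre-2.0 spelling of the coverage flags (--rc lcov_branch_coverage=1 and friends) is kept. A condensed sketch of that comparison, assuming purely numeric version fields (the real helper also validates each field with its decimal check):

    lt() { # succeeds when version $1 sorts strictly before version $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0  # earlier field decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1 # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1"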
00:14:00.806 13:08:07 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:00.806 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62489 ']' 00:14:00.806 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.806 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.806 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.806 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.806 13:08:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:01.064 [2024-12-06 13:08:07.438558] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:01.064 [2024-12-06 13:08:07.438753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62489 ] 00:14:01.322 [2024-12-06 13:08:07.631595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.322 [2024-12-06 13:08:07.760796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.259 13:08:08 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.259 13:08:08 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:14:02.259 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:02.259 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:14:02.259 13:08:08 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:02.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:02.777 Waiting for block devices as requested 00:14:02.777 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:02.777 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:02.777 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:03.040 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:08.304 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:08.304 13:08:14 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:14:08.304 BYT; 00:14:08.304 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:14:08.304 BYT; 00:14:08.304 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:08.304 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:08.304 13:08:14 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:08.305 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:08.305 13:08:14 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:14:09.247 The operation has completed successfully. 00:14:09.247 13:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:14:10.179 The operation has completed successfully. 00:14:10.179 13:08:16 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:10.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:11.309 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.309 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.309 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.309 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.309 13:08:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:14:11.309 13:08:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.309 13:08:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:11.309 [] 00:14:11.309 13:08:17 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.309 13:08:17 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:14:11.309 13:08:17 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:14:11.309 13:08:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:14:11.309 13:08:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:11.567 13:08:17 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:14:11.567 13:08:17 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.567 13:08:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:11.825 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.825 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:14:11.825 13:08:18 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.825 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:11.825 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.825 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:14:11.825 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:14:11.825 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.825 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:11.825 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.826 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:14:11.826 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.826 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:11.826 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.826 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:11.826 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.826 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:11.826 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.826 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:14:11.826 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:14:11.826 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:14:11.826 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.826 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:12.085 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.085 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:14:12.085 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:14:12.086 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "52994de6-10f4-4491-a38c-4d56e68c410a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "52994de6-10f4-4491-a38c-4d56e68c410a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e5226a5f-d041-49f2-af2f-9b1bd84ec74a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5226a5f-d041-49f2-af2f-9b1bd84ec74a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "96f72011-1475-413c-b6fe-e8d471987ae7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "96f72011-1475-413c-b6fe-e8d471987ae7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5edccdfb-b341-4bd0-ad10-cd4d462a1ce0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5edccdfb-b341-4bd0-ad10-cd4d462a1ce0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "354aa5da-d093-406d-bff3-f60d444f9bac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "354aa5da-d093-406d-bff3-f60d444f9bac",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:14:12.086 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:14:12.086 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:14:12.086 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:14:12.086 13:08:18 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62489 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62489 ']' 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62489 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62489 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62489' 00:14:12.086 killing process with pid 62489 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62489 00:14:12.086 13:08:18 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62489 00:14:14.619 13:08:20 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:14.619 13:08:20 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:14:14.619 13:08:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:14.619 13:08:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.619 13:08:20 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:14.619 ************************************ 00:14:14.619 START TEST bdev_hello_world 00:14:14.619 ************************************ 00:14:14.620 13:08:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:14:14.620 [2024-12-06 13:08:20.651115] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:14.620 [2024-12-06 13:08:20.651290] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63125 ] 00:14:14.620 [2024-12-06 13:08:20.834364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.620 [2024-12-06 13:08:20.979525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.191 [2024-12-06 13:08:21.633391] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:15.191 [2024-12-06 13:08:21.633448] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:14:15.191 [2024-12-06 13:08:21.633486] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:15.191 [2024-12-06 13:08:21.636558] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:15.191 [2024-12-06 13:08:21.637129] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:15.191 [2024-12-06 13:08:21.637172] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:15.191 [2024-12-06 13:08:21.637473] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
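That readback is the whole point of the example: the NOTICE sequence above traces hello_bdev's minimal bdev lifecycle (start the app, open the bdev named by -b, get an I/O channel, write a buffer, read it back and compare, hence the "Hello World!" string). The invocation mirrors the bdevperf ones, with -b selecting the target bdev:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1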
00:14:15.191 00:14:15.191 [2024-12-06 13:08:21.637530] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:16.571 00:14:16.571 real 0m2.092s 00:14:16.571 user 0m1.742s 00:14:16.571 sys 0m0.237s 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:16.571 ************************************ 00:14:16.571 END TEST bdev_hello_world 00:14:16.571 ************************************ 00:14:16.571 13:08:22 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:14:16.571 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:16.571 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.571 13:08:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:16.571 ************************************ 00:14:16.571 START TEST bdev_bounds 00:14:16.571 ************************************ 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63167 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:16.571 Process bdevio pid: 63167 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63167' 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63167 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63167 ']' 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.571 13:08:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:16.571 [2024-12-06 13:08:22.804608] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
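bdev_bounds drives bdevio in its RPC-driven mode: judging by the sequence here, -w makes the app initialize and then sit waiting for a perform_tests RPC rather than running the suites immediately, -s 0 passes the harness's PRE_RESERVED_MEM=0 through as the memory-size argument, and tests.py (invoked just below) fires that RPC once waitforlisten sees the socket come up. A minimal manual reproduction along the same lines:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests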
00:14:16.571 [2024-12-06 13:08:22.804805] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63167 ] 00:14:16.571 [2024-12-06 13:08:22.990220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.835 [2024-12-06 13:08:23.129445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.835 [2024-12-06 13:08:23.129530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.835 [2024-12-06 13:08:23.129531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.401 13:08:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.401 13:08:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:14:17.401 13:08:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:14:17.660 I/O targets:
00:14:17.660 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:14:17.660 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:14:17.660 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:14:17.660 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:14:17.660 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:14:17.660 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:14:17.660 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:14:17.660 00:14:17.660 00:14:17.660 CUnit - A unit testing framework for C - Version 2.1-3 00:14:17.660 http://cunit.sourceforge.net/ 00:14:17.660 00:14:17.660 00:14:17.660 Suite: bdevio tests on: Nvme3n1 00:14:17.660 Test: blockdev write read block ...passed 00:14:17.660 Test: blockdev write zeroes read block ...passed 00:14:17.660 Test: blockdev write zeroes read no split ...passed 00:14:17.660 Test: blockdev write zeroes read split ...passed 00:14:17.660 Test: blockdev write zeroes read split partial ...passed 00:14:17.660 Test: blockdev reset ...[2024-12-06 13:08:24.003325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:14:17.660 [2024-12-06 13:08:24.007155] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:14:17.660 passed 00:14:17.660 Test: blockdev write read 8 blocks ...passed 00:14:17.660 Test: blockdev write read size > 128k ...passed 00:14:17.660 Test: blockdev write read invalid size ...passed 00:14:17.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:17.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:17.660 Test: blockdev write read max offset ...passed 00:14:17.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:17.660 Test: blockdev writev readv 8 blocks ...passed 00:14:17.660 Test: blockdev writev readv 30 x 1block ...passed 00:14:17.660 Test: blockdev writev readv block ...passed 00:14:17.660 Test: blockdev writev readv size > 128k ...passed 00:14:17.660 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:17.660 Test: blockdev comparev and writev ...[2024-12-06 13:08:24.015138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb404000 len:0x1000 00:14:17.660 [2024-12-06 13:08:24.015204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:17.660 passed 00:14:17.660 Test: blockdev nvme passthru rw ...passed 00:14:17.660 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:08:24.016000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:17.660 [2024-12-06 13:08:24.016046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:17.660 passed 00:14:17.660 Test: blockdev nvme admin passthru ...passed 00:14:17.660 Test: blockdev copy ...passed 00:14:17.660 Suite: bdevio tests on: Nvme2n3 00:14:17.660 Test: blockdev write read block ...passed 00:14:17.660 Test: blockdev write zeroes read block ...passed 00:14:17.660 Test: blockdev write zeroes read no split ...passed 00:14:17.660 Test: blockdev write zeroes read split ...passed 00:14:17.660 Test: blockdev write zeroes read split partial ...passed 00:14:17.660 Test: blockdev reset ...[2024-12-06 13:08:24.082945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:14:17.660 [2024-12-06 13:08:24.087458] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
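A word on the *NOTICE* lines inside the "comparev and writev" and passthru tests, here and in the suites that follow: they are expected failures, not defects. The suite deliberately issues an NVMe Compare against mismatching data, and an admin opcode the controller does not implement (logged as FABRIC CONNECT or FABRIC RESERVED / VENDOR SPECIFIC), then checks that the controller returns the right status, which is why each of these tests still reports "passed". Per the NVMe status code tables, the (SCT/SC) pairs in the completions decode as:

    # NVMe completion status is printed as (SCT/SC) in the notices above
    #   (02/85) -> Media and Data Integrity Errors / Compare Failure (comparev test)
    #   (00/01) -> Generic Command Status / Invalid Command Opcode (passthru test)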
00:14:17.660 passed 00:14:17.660 Test: blockdev write read 8 blocks ...passed 00:14:17.660 Test: blockdev write read size > 128k ...passed 00:14:17.660 Test: blockdev write read invalid size ...passed 00:14:17.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:17.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:17.660 Test: blockdev write read max offset ...passed 00:14:17.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:17.660 Test: blockdev writev readv 8 blocks ...passed 00:14:17.660 Test: blockdev writev readv 30 x 1block ...passed 00:14:17.660 Test: blockdev writev readv block ...passed 00:14:17.660 Test: blockdev writev readv size > 128k ...passed 00:14:17.660 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:17.660 Test: blockdev comparev and writev ...[2024-12-06 13:08:24.095238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb402000 len:0x1000 00:14:17.660 [2024-12-06 13:08:24.095313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:17.660 passed 00:14:17.660 Test: blockdev nvme passthru rw ...passed 00:14:17.661 Test: blockdev nvme passthru vendor specific ...passed 00:14:17.661 Test: blockdev nvme admin passthru ...[2024-12-06 13:08:24.096043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:17.661 [2024-12-06 13:08:24.096093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:17.661 passed 00:14:17.661 Test: blockdev copy ...passed 00:14:17.661 Suite: bdevio tests on: Nvme2n2 00:14:17.661 Test: blockdev write read block ...passed 00:14:17.661 Test: blockdev write zeroes read block ...passed 00:14:17.661 Test: blockdev write zeroes read no split ...passed 00:14:17.661 Test: blockdev write zeroes read split ...passed 00:14:17.661 Test: blockdev write zeroes read split partial ...passed 00:14:17.661 Test: blockdev reset ...[2024-12-06 13:08:24.162550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:14:17.661 [2024-12-06 13:08:24.166919] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:14:17.661 passed 00:14:17.661 Test: blockdev write read 8 blocks ...passed 00:14:17.661 Test: blockdev write read size > 128k ...passed 00:14:17.661 Test: blockdev write read invalid size ...passed 00:14:17.661 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:17.661 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:17.661 Test: blockdev write read max offset ...passed 00:14:17.661 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:17.661 Test: blockdev writev readv 8 blocks ...passed 00:14:17.661 Test: blockdev writev readv 30 x 1block ...passed 00:14:17.661 Test: blockdev writev readv block ...passed 00:14:17.661 Test: blockdev writev readv size > 128k ...passed 00:14:17.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:17.661 Test: blockdev comparev and writev ...[2024-12-06 13:08:24.174682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfa38000 len:0x1000 00:14:17.661 [2024-12-06 13:08:24.174745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:17.661 passed 00:14:17.661 Test: blockdev nvme passthru rw ...passed 00:14:17.661 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:08:24.175642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:17.661 [2024-12-06 13:08:24.175686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:17.661 passed 00:14:17.661 Test: blockdev nvme admin passthru ...passed 00:14:17.661 Test: blockdev copy ...passed 00:14:17.661 Suite: bdevio tests on: Nvme2n1 00:14:17.661 Test: blockdev write read block ...passed 00:14:17.661 Test: blockdev write zeroes read block ...passed 00:14:17.919 Test: blockdev write zeroes read no split ...passed 00:14:17.919 Test: blockdev write zeroes read split ...passed 00:14:17.919 Test: blockdev write zeroes read split partial ...passed 00:14:17.919 Test: blockdev reset ...[2024-12-06 13:08:24.241692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:14:17.919 [2024-12-06 13:08:24.246031] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:14:17.919 passed 00:14:17.919 Test: blockdev write read 8 blocks ...passed 00:14:17.919 Test: blockdev write read size > 128k ...passed 00:14:17.919 Test: blockdev write read invalid size ...passed 00:14:17.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:17.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:17.919 Test: blockdev write read max offset ...passed 00:14:17.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:17.919 Test: blockdev writev readv 8 blocks ...passed 00:14:17.919 Test: blockdev writev readv 30 x 1block ...passed 00:14:17.919 Test: blockdev writev readv block ...passed 00:14:17.919 Test: blockdev writev readv size > 128k ...passed 00:14:17.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:17.919 Test: blockdev comparev and writev ...[2024-12-06 13:08:24.253629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfa34000 len:0x1000 00:14:17.919 [2024-12-06 13:08:24.253695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:17.919 passed 00:14:17.919 Test: blockdev nvme passthru rw ...passed 00:14:17.919 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:08:24.254571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:17.919 [2024-12-06 13:08:24.254615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:17.919 passed 00:14:17.919 Test: blockdev nvme admin passthru ...passed 00:14:17.919 Test: blockdev copy ...passed 00:14:17.919 Suite: bdevio tests on: Nvme1n1p2 00:14:17.919 Test: blockdev write read block ...passed 00:14:17.919 Test: blockdev write zeroes read block ...passed 00:14:17.919 Test: blockdev write zeroes read no split ...passed 00:14:17.919 Test: blockdev write zeroes read split ...passed 00:14:17.919 Test: blockdev write zeroes read split partial ...passed 00:14:17.919 Test: blockdev reset ...[2024-12-06 13:08:24.325127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:14:17.919 [2024-12-06 13:08:24.329033] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:14:17.919 passed 00:14:17.919 Test: blockdev write read 8 blocks ...passed 00:14:17.919 Test: blockdev write read size > 128k ...passed 00:14:17.919 Test: blockdev write read invalid size ...passed 00:14:17.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:17.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:17.920 Test: blockdev write read max offset ...passed 00:14:17.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:17.920 Test: blockdev writev readv 8 blocks ...passed 00:14:17.920 Test: blockdev writev readv 30 x 1block ...passed 00:14:17.920 Test: blockdev writev readv block ...passed 00:14:17.920 Test: blockdev writev readv size > 128k ...passed 00:14:17.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:17.920 Test: blockdev comparev and writev ...[2024-12-06 13:08:24.337702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cfa30000 len:0x1000 00:14:17.920 [2024-12-06 13:08:24.337790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:17.920 passed 00:14:17.920 Test: blockdev nvme passthru rw ...passed 00:14:17.920 Test: blockdev nvme passthru vendor specific ...passed 00:14:17.920 Test: blockdev nvme admin passthru ...passed 00:14:17.920 Test: blockdev copy ...passed 00:14:17.920 Suite: bdevio tests on: Nvme1n1p1 00:14:17.920 Test: blockdev write read block ...passed 00:14:17.920 Test: blockdev write zeroes read block ...passed 00:14:17.920 Test: blockdev write zeroes read no split ...passed 00:14:17.920 Test: blockdev write zeroes read split ...passed 00:14:17.920 Test: blockdev write zeroes read split partial ...passed 00:14:17.920 Test: blockdev reset ...[2024-12-06 13:08:24.398159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:14:17.920 [2024-12-06 13:08:24.402127] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:14:17.920 passed 00:14:17.920 Test: blockdev write read 8 blocks ...passed 00:14:17.920 Test: blockdev write read size > 128k ...passed 00:14:17.920 Test: blockdev write read invalid size ...passed 00:14:17.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:17.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:17.920 Test: blockdev write read max offset ...passed 00:14:17.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:17.920 Test: blockdev writev readv 8 blocks ...passed 00:14:17.920 Test: blockdev writev readv 30 x 1block ...passed 00:14:17.920 Test: blockdev writev readv block ...passed 00:14:17.920 Test: blockdev writev readv size > 128k ...passed 00:14:17.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:17.920 Test: blockdev comparev and writev ...[2024-12-06 13:08:24.410347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bbe0e000 len:0x1000 00:14:17.920 [2024-12-06 13:08:24.410411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:17.920 passed 00:14:17.920 Test: blockdev nvme passthru rw ...passed 00:14:17.920 Test: blockdev nvme passthru vendor specific ...passed 00:14:17.920 Test: blockdev nvme admin passthru ...passed 00:14:17.920 Test: blockdev copy ...passed 00:14:17.920 Suite: bdevio tests on: Nvme0n1 00:14:17.920 Test: blockdev write read block ...passed 00:14:17.920 Test: blockdev write zeroes read block ...passed 00:14:17.920 Test: blockdev write zeroes read no split ...passed 00:14:18.178 Test: blockdev write zeroes read split ...passed 00:14:18.178 Test: blockdev write zeroes read split partial ...passed 00:14:18.178 Test: blockdev reset ...[2024-12-06 13:08:24.473821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:14:18.178 [2024-12-06 13:08:24.477780] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:14:18.178 passed 00:14:18.178 Test: blockdev write read 8 blocks ...passed 00:14:18.178 Test: blockdev write read size > 128k ...passed 00:14:18.178 Test: blockdev write read invalid size ...passed 00:14:18.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:18.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:18.178 Test: blockdev write read max offset ...passed 00:14:18.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:18.178 Test: blockdev writev readv 8 blocks ...passed 00:14:18.178 Test: blockdev writev readv 30 x 1block ...passed 00:14:18.178 Test: blockdev writev readv block ...passed 00:14:18.178 Test: blockdev writev readv size > 128k ...passed 00:14:18.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:18.178 Test: blockdev comparev and writev ...passed 00:14:18.178 Test: blockdev nvme passthru rw ...[2024-12-06 13:08:24.484672] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:14:18.178 separate metadata which is not supported yet. 
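The ERROR line below is informational: bdevio skips comparev_and_writev on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which the test does not support yet. One hedged way to see which bdevs carry metadata is bdev_get_bdevs; the md_size field is an assumption from memory of that RPC's output and should be verified against the SPDK version in use:

    # default RPC socket is /var/tmp/spdk.sock; pass -s for a non-default server
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs |
        jq -r '.[] | "\(.name): md_size=\(.md_size)"'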
00:14:18.178 passed 00:14:18.178 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:08:24.485166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:14:18.178 [2024-12-06 13:08:24.485223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:14:18.178 passed 00:14:18.178 Test: blockdev nvme admin passthru ...passed 00:14:18.178 Test: blockdev copy ...passed 00:14:18.178 00:14:18.178 Run Summary: Type Total Ran Passed Failed Inactive 00:14:18.178 suites 7 7 n/a 0 0 00:14:18.178 tests 161 161 161 0 0 00:14:18.178 asserts 1025 1025 1025 0 n/a 00:14:18.178 00:14:18.178 Elapsed time = 1.486 seconds 00:14:18.179 0 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63167 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63167 ']' 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63167 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63167 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:18.179 killing process with pid 63167 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63167' 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63167 00:14:18.179 13:08:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63167 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:19.123 00:14:19.123 real 0m2.773s 00:14:19.123 user 0m7.149s 00:14:19.123 sys 0m0.380s 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:19.123 ************************************ 00:14:19.123 END TEST bdev_bounds 00:14:19.123 ************************************ 00:14:19.123 13:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:14:19.123 13:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:19.123 13:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.123 13:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:19.123 ************************************ 00:14:19.123 START TEST bdev_nbd 00:14:19.123 ************************************ 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:14:19.123 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63222 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63222 /var/tmp/spdk-nbd.sock 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63222 ']' 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.124 13:08:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:19.124 [2024-12-06 13:08:25.647410] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
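The trace above is the setup half of nbd_function_test: start bdev_svc against the bdev JSON config, wait for its RPC socket to come up, then export each bdev as a kernel NBD node. A condensed sketch of that flow, using only commands visible in this log except the readiness poll, where rpc_get_methods stands in for waitforlisten as an assumption:

    rpc=/var/tmp/spdk-nbd.sock
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/test/app/bdev_svc/bdev_svc -r "$rpc" -i 0 --json "$spdk"/test/bdev/bdev.json &
    # poll until the UNIX-domain RPC socket answers (stand-in for waitforlisten)
    until "$spdk"/scripts/rpc.py -s "$rpc" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # export one bdev per NBD node; the per-device calls traced below repeat this
    "$spdk"/scripts/rpc.py -s "$rpc" nbd_start_disk Nvme0n1 /dev/nbd0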
00:14:19.124 [2024-12-06 13:08:25.648110] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.382 [2024-12-06 13:08:25.836210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.639 [2024-12-06 13:08:25.942484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:20.202 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:14:20.766 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:20.766 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:20.766 13:08:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:20.766 13:08:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:20.766 13:08:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:20.766 13:08:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.766 13:08:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.766 13:08:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.766 1+0 records in 00:14:20.766 1+0 records out 00:14:20.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570441 s, 7.2 MB/s 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:20.766 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.023 1+0 records in 00:14:21.023 1+0 records out 00:14:21.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518803 s, 7.9 MB/s 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:21.023 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.280 1+0 records in 00:14:21.280 1+0 records out 00:14:21.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070402 s, 5.8 MB/s 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:21.280 13:08:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:14:21.538 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:21.538 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:21.538 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:21.538 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:14:21.538 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:21.538 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.796 1+0 records in 00:14:21.796 1+0 records out 00:14:21.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606835 s, 6.7 MB/s 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:21.796 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.053 1+0 records in 00:14:22.053 1+0 records out 00:14:22.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701728 s, 5.8 MB/s 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:22.053 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.311 1+0 records in 00:14:22.311 1+0 records out 00:14:22.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105575 s, 3.9 MB/s 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:22.311 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.312 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.312 13:08:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:22.312 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:22.312 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:22.312 13:08:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.912 1+0 records in 00:14:22.912 1+0 records out 00:14:22.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886822 s, 4.6 MB/s 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:22.912 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd0", 00:14:23.170 "bdev_name": "Nvme0n1" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd1", 00:14:23.170 "bdev_name": "Nvme1n1p1" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd2", 00:14:23.170 "bdev_name": "Nvme1n1p2" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd3", 00:14:23.170 "bdev_name": "Nvme2n1" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd4", 00:14:23.170 "bdev_name": "Nvme2n2" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd5", 00:14:23.170 "bdev_name": "Nvme2n3" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd6", 00:14:23.170 "bdev_name": "Nvme3n1" 00:14:23.170 } 00:14:23.170 ]' 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd0", 00:14:23.170 "bdev_name": "Nvme0n1" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd1", 00:14:23.170 "bdev_name": "Nvme1n1p1" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd2", 00:14:23.170 "bdev_name": "Nvme1n1p2" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd3", 00:14:23.170 "bdev_name": "Nvme2n1" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd4", 00:14:23.170 "bdev_name": "Nvme2n2" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd5", 00:14:23.170 "bdev_name": "Nvme2n3" 00:14:23.170 }, 00:14:23.170 { 00:14:23.170 "nbd_device": "/dev/nbd6", 00:14:23.170 "bdev_name": "Nvme3n1" 00:14:23.170 } 00:14:23.170 ]' 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.170 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.428 13:08:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.686 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.252 13:08:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.510 13:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.769 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.026 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
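The teardown traced around here mirrors the setup: nbd_stop_disk per device, waitfornbd_exit polls /proc/partitions until the kernel drops the name, and nbd_get_disks must then report an empty list. A condensed sketch of the pattern (the retry sleep is an assumption; the trace only shows iterations where the device was already gone):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone from the kernel
            sleep 0.1
        done
        return 0
    }
    rpc=/var/tmp/spdk-nbd.sock
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/scripts/rpc.py -s "$rpc" nbd_stop_disk /dev/nbd6
    waitfornbd_exit nbd6
    # verify nothing is left exported: count /dev/nbd entries in the JSON, as the trace does
    count=$("$spdk"/scripts/rpc.py -s "$rpc" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && echo 'all NBD devices torn down'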
00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:25.592 13:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.851 13:08:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:25.851 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:14:26.110 /dev/nbd0 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.110 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.111 1+0 records in 00:14:26.111 1+0 records out 00:14:26.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529671 s, 7.7 MB/s 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:26.111 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:14:26.370 /dev/nbd1 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.370 13:08:32 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.370 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.627 1+0 records in 00:14:26.627 1+0 records out 00:14:26.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696826 s, 5.9 MB/s 00:14:26.627 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.627 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:26.627 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.627 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.627 13:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:26.627 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.627 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:26.627 13:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:14:26.885 /dev/nbd10 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.885 1+0 records in 00:14:26.885 1+0 records out 00:14:26.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654411 s, 6.3 MB/s 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.885 13:08:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:26.886 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:26.886 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.886 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:26.886 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:14:27.144 /dev/nbd11 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.144 1+0 records in 00:14:27.144 1+0 records out 00:14:27.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530149 s, 7.7 MB/s 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:27.144 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:27.145 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.145 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:27.145 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:14:27.403 /dev/nbd12 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
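The dd probe traced repeatedly through this sequence is waitfornbd: wait until /proc/partitions lists the device, then prove the NBD export is live with a single 4 KiB direct-I/O read and check that a non-empty file was copied. A condensed sketch of the helper as it appears in the xtrace (scratch path shortened from the nbdtest path used in the trace; the retry sleep is an assumption, since the trace only shows first-try hits):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct-I/O block read; fails immediately if the export is not serving I/O
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # 4096 bytes copied means the device is ready
    }
    waitfornbd nbd10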
00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.403 1+0 records in 00:14:27.403 1+0 records out 00:14:27.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752979 s, 5.4 MB/s 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:27.403 13:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:14:27.971 /dev/nbd13 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.971 1+0 records in 00:14:27.971 1+0 records out 00:14:27.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587636 s, 7.0 MB/s 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:27.971 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:14:28.230 /dev/nbd14 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.230 1+0 records in 00:14:28.230 1+0 records out 00:14:28.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000956424 s, 4.3 MB/s 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.230 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:28.489 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd0", 00:14:28.489 "bdev_name": "Nvme0n1" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd1", 00:14:28.489 "bdev_name": "Nvme1n1p1" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd10", 00:14:28.489 "bdev_name": "Nvme1n1p2" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd11", 00:14:28.489 "bdev_name": "Nvme2n1" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd12", 00:14:28.489 "bdev_name": "Nvme2n2" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd13", 00:14:28.489 "bdev_name": "Nvme2n3" 
00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd14", 00:14:28.489 "bdev_name": "Nvme3n1" 00:14:28.489 } 00:14:28.489 ]' 00:14:28.489 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:28.489 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd0", 00:14:28.489 "bdev_name": "Nvme0n1" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd1", 00:14:28.489 "bdev_name": "Nvme1n1p1" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd10", 00:14:28.489 "bdev_name": "Nvme1n1p2" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd11", 00:14:28.489 "bdev_name": "Nvme2n1" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd12", 00:14:28.489 "bdev_name": "Nvme2n2" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd13", 00:14:28.489 "bdev_name": "Nvme2n3" 00:14:28.489 }, 00:14:28.489 { 00:14:28.489 "nbd_device": "/dev/nbd14", 00:14:28.489 "bdev_name": "Nvme3n1" 00:14:28.489 } 00:14:28.489 ]' 00:14:28.489 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:28.490 /dev/nbd1 00:14:28.490 /dev/nbd10 00:14:28.490 /dev/nbd11 00:14:28.490 /dev/nbd12 00:14:28.490 /dev/nbd13 00:14:28.490 /dev/nbd14' 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:28.490 /dev/nbd1 00:14:28.490 /dev/nbd10 00:14:28.490 /dev/nbd11 00:14:28.490 /dev/nbd12 00:14:28.490 /dev/nbd13 00:14:28.490 /dev/nbd14' 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:28.490 256+0 records in 00:14:28.490 256+0 records out 00:14:28.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00745656 s, 141 MB/s 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.490 13:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:28.748 256+0 records in 00:14:28.748 256+0 records out 00:14:28.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.147906 s, 7.1 MB/s 00:14:28.748 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.748 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:29.006 256+0 records in 00:14:29.006 256+0 records out 00:14:29.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167214 s, 6.3 MB/s 00:14:29.006 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.006 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:29.006 256+0 records in 00:14:29.006 256+0 records out 00:14:29.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149095 s, 7.0 MB/s 00:14:29.006 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.006 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:29.288 256+0 records in 00:14:29.288 256+0 records out 00:14:29.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161367 s, 6.5 MB/s 00:14:29.288 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.288 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:29.288 256+0 records in 00:14:29.288 256+0 records out 00:14:29.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15968 s, 6.6 MB/s 00:14:29.288 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.288 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:29.546 256+0 records in 00:14:29.546 256+0 records out 00:14:29.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150613 s, 7.0 MB/s 00:14:29.546 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.546 13:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:14:29.805 256+0 records in 00:14:29.805 256+0 records out 00:14:29.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143251 s, 7.3 MB/s 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.805 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.063 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.321 13:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:30.580 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.838 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.096 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.355 13:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.614 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:31.873 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:32.440 13:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:32.698 malloc_lvol_verify 00:14:32.698 13:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:32.955 251d1982-f9ee-4abc-a9a0-6b9e27a5e5c5 00:14:32.955 13:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:33.214 a2aab821-2ae9-4048-b440-e672e8474262 00:14:33.484 13:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:33.742 /dev/nbd0 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:14:33.742 mke2fs 1.47.0 (5-Feb-2023) 00:14:33.742 Discarding device blocks: 0/4096 done 00:14:33.742 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:33.742 00:14:33.742 Allocating group tables: 0/1 done 00:14:33.742 Writing inode tables: 0/1 done 00:14:33.742 Creating journal (1024 blocks): done 00:14:33.742 Writing superblocks and filesystem accounting information: 0/1 done 00:14:33.742 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:33.742 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:33.743 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63222 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63222 ']' 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63222 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63222 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.001 killing process with pid 63222 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63222' 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63222 00:14:34.001 13:08:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63222 00:14:35.375 ************************************ 00:14:35.375 END TEST bdev_nbd 00:14:35.375 ************************************ 00:14:35.375 13:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:35.375 00:14:35.375 real 0m15.985s 00:14:35.375 user 0m23.332s 00:14:35.375 sys 0m4.989s 00:14:35.375 13:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.375 13:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:35.375 skipping fio tests on NVMe due to multi-ns failures. 00:14:35.375 13:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:14:35.375 13:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:14:35.375 13:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:14:35.375 13:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:14:35.375 13:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:35.375 13:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:35.375 13:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:35.375 13:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.375 13:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:35.375 ************************************ 00:14:35.375 START TEST bdev_verify 00:14:35.375 ************************************ 00:14:35.375 13:08:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:35.375 [2024-12-06 13:08:41.659777] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:35.375 [2024-12-06 13:08:41.660214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63688 ] 00:14:35.375 [2024-12-06 13:08:41.848825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:35.633 [2024-12-06 13:08:41.975487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.633 [2024-12-06 13:08:41.975492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.197 Running I/O for 5 seconds... 
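Before the numbers land below, the invocation is worth decoding. bdevperf is driven with the flags shown in the run_test line above; the per-flag glosses here are my reading of them (the -C interpretation in particular is inferred from the paired Core Mask 0x1/0x2 rows in the table that follows, not taken from SPDK documentation):

    # Flags copied verbatim from the trace above.
    #   -q 128     128 I/Os kept in flight per job
    #   -o 4096    4 KiB per I/O
    #   -w verify  write a pattern, read it back, and compare
    #   -t 5       run each job for 5 seconds
    #   -C         appears to fan each bdev's job out to every core in the mask
    #   -m 0x3     core mask: reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''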
00:14:38.500 21824.00 IOPS, 85.25 MiB/s [2024-12-06T13:08:45.962Z] 19680.00 IOPS, 76.88 MiB/s [2024-12-06T13:08:47.336Z] 18986.67 IOPS, 74.17 MiB/s [2024-12-06T13:08:47.901Z] 18528.00 IOPS, 72.38 MiB/s [2024-12-06T13:08:47.901Z] 18009.60 IOPS, 70.35 MiB/s 00:14:41.373 Latency(us) 00:14:41.373 [2024-12-06T13:08:47.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.373 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x0 length 0xbd0bd 00:14:41.373 Nvme0n1 : 5.05 1291.42 5.04 0.00 0.00 98851.04 20018.27 89128.96 00:14:41.373 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:41.373 Nvme0n1 : 5.09 1256.60 4.91 0.00 0.00 101637.28 19184.17 110577.11 00:14:41.373 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x0 length 0x4ff80 00:14:41.373 Nvme1n1p1 : 5.06 1290.98 5.04 0.00 0.00 98747.51 22163.08 86269.21 00:14:41.373 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x4ff80 length 0x4ff80 00:14:41.373 Nvme1n1p1 : 5.10 1256.13 4.91 0.00 0.00 101452.21 16920.20 110577.11 00:14:41.373 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x0 length 0x4ff7f 00:14:41.373 Nvme1n1p2 : 5.06 1290.57 5.04 0.00 0.00 98606.63 20852.36 84362.71 00:14:41.373 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:14:41.373 Nvme1n1p2 : 5.10 1255.64 4.90 0.00 0.00 101355.90 16920.20 109147.23 00:14:41.373 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x0 length 0x80000 00:14:41.373 Nvme2n1 : 5.06 1290.15 5.04 0.00 0.00 98484.74 20375.74 81026.33 00:14:41.373 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x80000 length 0x80000 00:14:41.373 Nvme2n1 : 5.10 1254.67 4.90 0.00 0.00 101232.02 19660.80 106287.48 00:14:41.373 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x0 length 0x80000 00:14:41.373 Nvme2n2 : 5.06 1289.73 5.04 0.00 0.00 98362.93 19303.33 83886.08 00:14:41.373 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x80000 length 0x80000 00:14:41.373 Nvme2n2 : 5.10 1254.23 4.90 0.00 0.00 101088.63 19899.11 103904.35 00:14:41.373 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x0 length 0x80000 00:14:41.373 Nvme2n3 : 5.06 1289.32 5.04 0.00 0.00 98236.65 18588.39 88175.71 00:14:41.373 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x80000 length 0x80000 00:14:41.373 Nvme2n3 : 5.10 1253.79 4.90 0.00 0.00 100948.91 15371.17 102474.47 00:14:41.373 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x0 length 0x20000 00:14:41.373 Nvme3n1 : 5.07 1299.73 5.08 0.00 0.00 97403.33 2442.71 89605.59 00:14:41.373 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:41.373 Verification LBA range: start 0x20000 length 0x20000 
00:14:41.373 Nvme3n1 : 5.11 1253.36 4.90 0.00 0.00 100841.90 12451.84 107240.73 00:14:41.373 [2024-12-06T13:08:47.901Z] =================================================================================================================== 00:14:41.373 [2024-12-06T13:08:47.902Z] Total : 17826.32 69.63 0.00 0.00 99787.81 2442.71 110577.11 00:14:42.778 00:14:42.778 real 0m7.622s 00:14:42.778 user 0m14.083s 00:14:42.778 sys 0m0.265s 00:14:42.778 ************************************ 00:14:42.778 END TEST bdev_verify 00:14:42.778 ************************************ 00:14:42.778 13:08:49 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.778 13:08:49 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:42.778 13:08:49 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:42.778 13:08:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:42.778 13:08:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.778 13:08:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:42.778 ************************************ 00:14:42.778 START TEST bdev_verify_big_io 00:14:42.778 ************************************ 00:14:42.778 13:08:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:43.037 [2024-12-06 13:08:49.335077] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:43.037 [2024-12-06 13:08:49.335265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63792 ] 00:14:43.037 [2024-12-06 13:08:49.519938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:43.295 [2024-12-06 13:08:49.626372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.295 [2024-12-06 13:08:49.626385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.230 Running I/O for 5 seconds... 
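This pass reuses the verify harness with one change: -o 65536, so every operation carries a 64 KiB payload instead of 4 KiB. IOPS drop roughly an order of magnitude relative to the 4 KiB run while aggregate bandwidth holds, and the Total row in the table below can be cross-checked directly, since IOPS times I/O size should reproduce the MiB/s column:

    # 1619.69 IOPS x 65536 B per I/O, expressed in MiB/s
    echo '1619.69 * 65536 / 1048576' | bc -l   # -> 101.23, matching the table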
00:14:48.659 224.00 IOPS, 14.00 MiB/s [2024-12-06T13:08:56.560Z] 1612.00 IOPS, 100.75 MiB/s [2024-12-06T13:08:56.818Z] 2781.33 IOPS, 173.83 MiB/s 00:14:50.290 Latency(us) 00:14:50.290 [2024-12-06T13:08:56.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.290 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x0 length 0xbd0b 00:14:50.290 Nvme0n1 : 5.87 114.43 7.15 0.00 0.00 1068427.77 16801.05 1121023.07 00:14:50.290 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:50.290 Nvme0n1 : 5.94 107.22 6.70 0.00 0.00 1135280.12 20971.52 1212535.16 00:14:50.290 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x0 length 0x4ff8 00:14:50.290 Nvme1n1p1 : 5.81 114.50 7.16 0.00 0.00 1047001.34 90558.84 953250.91 00:14:50.290 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x4ff8 length 0x4ff8 00:14:50.290 Nvme1n1p1 : 5.85 106.29 6.64 0.00 0.00 1107943.46 73876.95 1220161.16 00:14:50.290 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x0 length 0x4ff7 00:14:50.290 Nvme1n1p2 : 6.03 74.27 4.64 0.00 0.00 1567787.09 138221.38 2181038.08 00:14:50.290 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x4ff7 length 0x4ff7 00:14:50.290 Nvme1n1p2 : 5.94 104.98 6.56 0.00 0.00 1099913.30 86269.21 1853119.77 00:14:50.290 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x0 length 0x8000 00:14:50.290 Nvme2n1 : 5.93 116.03 7.25 0.00 0.00 986128.37 134408.38 926559.88 00:14:50.290 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x8000 length 0x8000 00:14:50.290 Nvme2n1 : 6.01 108.40 6.77 0.00 0.00 1036409.26 65774.31 1883623.80 00:14:50.290 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x0 length 0x8000 00:14:50.290 Nvme2n2 : 5.97 124.46 7.78 0.00 0.00 897403.78 51952.17 926559.88 00:14:50.290 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x8000 length 0x8000 00:14:50.290 Nvme2n2 : 6.03 119.32 7.46 0.00 0.00 919736.11 14120.03 1357429.29 00:14:50.290 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x0 length 0x8000 00:14:50.290 Nvme2n3 : 5.97 128.59 8.04 0.00 0.00 846461.52 36938.47 960876.92 00:14:50.290 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x8000 length 0x8000 00:14:50.290 Nvme2n3 : 6.08 119.00 7.44 0.00 0.00 891507.18 21805.61 1967509.88 00:14:50.290 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x0 length 0x2000 00:14:50.290 Nvme3n1 : 6.05 143.24 8.95 0.00 0.00 740613.57 7060.01 999006.95 00:14:50.290 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:50.290 Verification LBA range: start 0x2000 length 0x2000 00:14:50.290 Nvme3n1 : 6.11 138.96 8.68 0.00 0.00 749444.63 2591.65 1998013.91 00:14:50.290 
[2024-12-06T13:08:56.818Z] =================================================================================================================== 00:14:50.290 [2024-12-06T13:08:56.818Z] Total : 1619.69 101.23 0.00 0.00 978676.15 2591.65 2181038.08 00:14:52.196 00:14:52.196 real 0m9.089s 00:14:52.196 user 0m16.988s 00:14:52.196 sys 0m0.292s 00:14:52.196 13:08:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.196 ************************************ 00:14:52.196 END TEST bdev_verify_big_io 00:14:52.196 ************************************ 00:14:52.196 13:08:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.196 13:08:58 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:52.196 13:08:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:52.196 13:08:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.196 13:08:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:52.196 ************************************ 00:14:52.196 START TEST bdev_write_zeroes 00:14:52.196 ************************************ 00:14:52.196 13:08:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:52.196 [2024-12-06 13:08:58.447273] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:52.196 [2024-12-06 13:08:58.447618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63912 ] 00:14:52.196 [2024-12-06 13:08:58.616083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.196 [2024-12-06 13:08:58.718266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.132 Running I/O for 1 seconds... 
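The write_zeroes pass below is a short single-core run of the same bdevperf binary with -w write_zeroes: each "write" is a zero-fill command rather than data carried from host memory. Whether a bdev accepts that I/O type is advertised in its supported_io_types map (the gpt_uuid output further down shows "write_zeroes": true for the GPT partition bdevs), and it can be queried up front against a running target; a small sketch, with the bdev name illustrative:

    # Ask a running SPDK target whether a bdev supports the write_zeroes I/O type.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1 \
        | jq -r '.[0].supported_io_types.write_zeroes'    # prints: true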
00:14:54.089 50176.00 IOPS, 196.00 MiB/s 00:14:54.089 Latency(us) 00:14:54.089 [2024-12-06T13:09:00.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.089 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:54.089 Nvme0n1 : 1.04 7053.11 27.55 0.00 0.00 18100.47 8936.73 41943.04 00:14:54.089 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:54.089 Nvme1n1p1 : 1.05 7041.94 27.51 0.00 0.00 18093.63 11856.06 40513.16 00:14:54.089 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:54.089 Nvme1n1p2 : 1.05 7030.68 27.46 0.00 0.00 18045.93 8936.73 39321.60 00:14:54.089 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:54.089 Nvme2n1 : 1.05 7020.48 27.42 0.00 0.00 18037.37 8936.73 37653.41 00:14:54.089 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:54.089 Nvme2n2 : 1.05 7010.24 27.38 0.00 0.00 18033.93 9413.35 37415.10 00:14:54.089 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:54.089 Nvme2n3 : 1.05 7000.02 27.34 0.00 0.00 18029.16 9294.20 39559.91 00:14:54.089 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:54.089 Nvme3n1 : 1.05 6989.84 27.30 0.00 0.00 18024.73 8996.31 41466.41 00:14:54.089 [2024-12-06T13:09:00.617Z] =================================================================================================================== 00:14:54.089 [2024-12-06T13:09:00.618Z] Total : 49146.30 191.98 0.00 0.00 18052.17 8936.73 41943.04 00:14:55.024 00:14:55.024 real 0m3.180s 00:14:55.024 user 0m2.836s 00:14:55.024 sys 0m0.219s 00:14:55.024 ************************************ 00:14:55.024 END TEST bdev_write_zeroes 00:14:55.024 ************************************ 00:14:55.024 13:09:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.024 13:09:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:55.283 13:09:01 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:55.283 13:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:55.283 13:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.283 13:09:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:55.283 ************************************ 00:14:55.283 START TEST bdev_json_nonenclosed 00:14:55.283 ************************************ 00:14:55.283 13:09:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:55.283 [2024-12-06 13:09:01.700946] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:55.283 [2024-12-06 13:09:01.701351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63965 ] 00:14:55.542 [2024-12-06 13:09:01.884611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.542 [2024-12-06 13:09:02.008745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.542 [2024-12-06 13:09:02.009111] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:55.542 [2024-12-06 13:09:02.009156] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:55.542 [2024-12-06 13:09:02.009174] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:55.801 00:14:55.801 real 0m0.701s 00:14:55.801 user 0m0.463s 00:14:55.801 sys 0m0.132s 00:14:55.801 13:09:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.801 ************************************ 00:14:55.801 END TEST bdev_json_nonenclosed 00:14:55.801 ************************************ 00:14:55.801 13:09:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:56.059 13:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:56.059 13:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:56.059 13:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.059 13:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:56.059 ************************************ 00:14:56.059 START TEST bdev_json_nonarray 00:14:56.059 ************************************ 00:14:56.059 13:09:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:56.059 [2024-12-06 13:09:02.456566] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:56.059 [2024-12-06 13:09:02.457055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:14:56.318 [2024-12-06 13:09:02.644180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.318 [2024-12-06 13:09:02.770656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.318 [2024-12-06 13:09:02.771071] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:14:56.318 [2024-12-06 13:09:02.771139] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:56.318 [2024-12-06 13:09:02.771160] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:56.576 00:14:56.576 real 0m0.726s 00:14:56.576 user 0m0.496s 00:14:56.576 sys 0m0.123s 00:14:56.576 13:09:03 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.576 13:09:03 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:56.576 ************************************ 00:14:56.576 END TEST bdev_json_nonarray 00:14:56.576 ************************************ 00:14:56.833 13:09:03 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:14:56.834 13:09:03 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:14:56.834 13:09:03 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:14:56.834 13:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:56.834 13:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.834 13:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:56.834 ************************************ 00:14:56.834 START TEST bdev_gpt_uuid 00:14:56.834 ************************************ 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64016 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:56.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64016 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64016 ']' 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.834 13:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:56.834 [2024-12-06 13:09:03.263073] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:56.834 [2024-12-06 13:09:03.263903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64016 ] 00:14:57.092 [2024-12-06 13:09:03.438093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.092 [2024-12-06 13:09:03.564647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.027 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.027 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:14:58.027 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:58.027 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.027 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 Some configs were skipped because the RPC state that can call them passed over. 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:14:58.286 { 00:14:58.286 "name": "Nvme1n1p1", 00:14:58.286 "aliases": [ 00:14:58.286 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:14:58.286 ], 00:14:58.286 "product_name": "GPT Disk", 00:14:58.286 "block_size": 4096, 00:14:58.286 "num_blocks": 655104, 00:14:58.286 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:58.286 "assigned_rate_limits": { 00:14:58.286 "rw_ios_per_sec": 0, 00:14:58.286 "rw_mbytes_per_sec": 0, 00:14:58.286 "r_mbytes_per_sec": 0, 00:14:58.286 "w_mbytes_per_sec": 0 00:14:58.286 }, 00:14:58.286 "claimed": false, 00:14:58.286 "zoned": false, 00:14:58.286 "supported_io_types": { 00:14:58.286 "read": true, 00:14:58.286 "write": true, 00:14:58.286 "unmap": true, 00:14:58.286 "flush": true, 00:14:58.286 "reset": true, 00:14:58.286 "nvme_admin": false, 00:14:58.286 "nvme_io": false, 00:14:58.286 "nvme_io_md": false, 00:14:58.286 "write_zeroes": true, 00:14:58.286 "zcopy": false, 00:14:58.286 "get_zone_info": false, 00:14:58.286 "zone_management": false, 00:14:58.286 "zone_append": false, 00:14:58.286 "compare": true, 00:14:58.286 "compare_and_write": false, 00:14:58.286 "abort": true, 00:14:58.286 "seek_hole": false, 00:14:58.286 "seek_data": false, 00:14:58.286 "copy": true, 00:14:58.286 "nvme_iov_md": false 00:14:58.286 }, 00:14:58.286 "driver_specific": { 
00:14:58.286 "gpt": { 00:14:58.286 "base_bdev": "Nvme1n1", 00:14:58.286 "offset_blocks": 256, 00:14:58.286 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:14:58.286 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:58.286 "partition_name": "SPDK_TEST_first" 00:14:58.286 } 00:14:58.286 } 00:14:58.286 } 00:14:58.286 ]' 00:14:58.286 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.544 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:14:58.544 { 00:14:58.544 "name": "Nvme1n1p2", 00:14:58.544 "aliases": [ 00:14:58.544 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:14:58.544 ], 00:14:58.544 "product_name": "GPT Disk", 00:14:58.544 "block_size": 4096, 00:14:58.544 "num_blocks": 655103, 00:14:58.544 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:58.544 "assigned_rate_limits": { 00:14:58.545 "rw_ios_per_sec": 0, 00:14:58.545 "rw_mbytes_per_sec": 0, 00:14:58.545 "r_mbytes_per_sec": 0, 00:14:58.545 "w_mbytes_per_sec": 0 00:14:58.545 }, 00:14:58.545 "claimed": false, 00:14:58.545 "zoned": false, 00:14:58.545 "supported_io_types": { 00:14:58.545 "read": true, 00:14:58.545 "write": true, 00:14:58.545 "unmap": true, 00:14:58.545 "flush": true, 00:14:58.545 "reset": true, 00:14:58.545 "nvme_admin": false, 00:14:58.545 "nvme_io": false, 00:14:58.545 "nvme_io_md": false, 00:14:58.545 "write_zeroes": true, 00:14:58.545 "zcopy": false, 00:14:58.545 "get_zone_info": false, 00:14:58.545 "zone_management": false, 00:14:58.545 "zone_append": false, 00:14:58.545 "compare": true, 00:14:58.545 "compare_and_write": false, 00:14:58.545 "abort": true, 00:14:58.545 "seek_hole": false, 00:14:58.545 "seek_data": false, 00:14:58.545 "copy": true, 00:14:58.545 "nvme_iov_md": false 00:14:58.545 }, 00:14:58.545 "driver_specific": { 00:14:58.545 "gpt": { 00:14:58.545 "base_bdev": "Nvme1n1", 00:14:58.545 "offset_blocks": 655360, 00:14:58.545 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:14:58.545 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:58.545 "partition_name": "SPDK_TEST_second" 00:14:58.545 } 00:14:58.545 } 00:14:58.545 } 00:14:58.545 ]' 00:14:58.545 13:09:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:14:58.545 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:14:58.545 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64016 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64016 ']' 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64016 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64016 00:14:58.803 killing process with pid 64016 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64016' 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64016 00:14:58.803 13:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64016 00:15:00.702 ************************************ 00:15:00.702 END TEST bdev_gpt_uuid 00:15:00.702 ************************************ 00:15:00.702 00:15:00.702 real 0m4.090s 00:15:00.702 user 0m4.544s 00:15:00.702 sys 0m0.468s 00:15:00.702 13:09:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.702 13:09:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:15:00.961 13:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:01.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:01.477 Waiting for block devices as requested 00:15:01.477 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:01.477 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:15:01.477 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:01.734 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:07.046 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:07.046 13:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:15:07.046 13:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:15:07.046 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:07.046 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:07.046 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:07.046 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:07.046 13:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:15:07.046 00:15:07.046 real 1m6.269s 00:15:07.046 user 1m26.446s 00:15:07.046 sys 0m10.178s 00:15:07.046 13:09:13 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.046 13:09:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:07.046 ************************************ 00:15:07.046 END TEST blockdev_nvme_gpt 00:15:07.046 ************************************ 00:15:07.046 13:09:13 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:15:07.046 13:09:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:07.046 13:09:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.046 13:09:13 -- common/autotest_common.sh@10 -- # set +x 00:15:07.046 ************************************ 00:15:07.046 START TEST nvme 00:15:07.046 ************************************ 00:15:07.046 13:09:13 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:15:07.046 * Looking for test storage... 00:15:07.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:07.046 13:09:13 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:07.046 13:09:13 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:07.046 13:09:13 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:07.302 13:09:13 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:07.302 13:09:13 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.302 13:09:13 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.302 13:09:13 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.302 13:09:13 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.302 13:09:13 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.302 13:09:13 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.302 13:09:13 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.302 13:09:13 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.302 13:09:13 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.302 13:09:13 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.302 13:09:13 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.302 13:09:13 nvme -- scripts/common.sh@344 -- # case "$op" in 00:15:07.302 13:09:13 nvme -- scripts/common.sh@345 -- # : 1 00:15:07.302 13:09:13 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.302 13:09:13 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.302 13:09:13 nvme -- scripts/common.sh@365 -- # decimal 1 00:15:07.302 13:09:13 nvme -- scripts/common.sh@353 -- # local d=1 00:15:07.302 13:09:13 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.302 13:09:13 nvme -- scripts/common.sh@355 -- # echo 1 00:15:07.302 13:09:13 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.302 13:09:13 nvme -- scripts/common.sh@366 -- # decimal 2 00:15:07.302 13:09:13 nvme -- scripts/common.sh@353 -- # local d=2 00:15:07.302 13:09:13 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.302 13:09:13 nvme -- scripts/common.sh@355 -- # echo 2 00:15:07.302 13:09:13 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.302 13:09:13 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.302 13:09:13 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.302 13:09:13 nvme -- scripts/common.sh@368 -- # return 0 00:15:07.302 13:09:13 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.302 13:09:13 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:07.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.302 --rc genhtml_branch_coverage=1 00:15:07.302 --rc genhtml_function_coverage=1 00:15:07.302 --rc genhtml_legend=1 00:15:07.302 --rc geninfo_all_blocks=1 00:15:07.302 --rc geninfo_unexecuted_blocks=1 00:15:07.302 00:15:07.302 ' 00:15:07.302 13:09:13 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:07.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.302 --rc genhtml_branch_coverage=1 00:15:07.302 --rc genhtml_function_coverage=1 00:15:07.302 --rc genhtml_legend=1 00:15:07.302 --rc geninfo_all_blocks=1 00:15:07.302 --rc geninfo_unexecuted_blocks=1 00:15:07.302 00:15:07.302 ' 00:15:07.302 13:09:13 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:07.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.302 --rc genhtml_branch_coverage=1 00:15:07.302 --rc genhtml_function_coverage=1 00:15:07.302 --rc genhtml_legend=1 00:15:07.302 --rc geninfo_all_blocks=1 00:15:07.303 --rc geninfo_unexecuted_blocks=1 00:15:07.303 00:15:07.303 ' 00:15:07.303 13:09:13 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:07.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.303 --rc genhtml_branch_coverage=1 00:15:07.303 --rc genhtml_function_coverage=1 00:15:07.303 --rc genhtml_legend=1 00:15:07.303 --rc geninfo_all_blocks=1 00:15:07.303 --rc geninfo_unexecuted_blocks=1 00:15:07.303 00:15:07.303 ' 00:15:07.303 13:09:13 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:07.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:08.434 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.434 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.434 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.434 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:08.434 13:09:14 nvme -- nvme/nvme.sh@79 -- # uname 00:15:08.434 13:09:14 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:15:08.434 13:09:14 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:15:08.434 13:09:14 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:15:08.434 13:09:14 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:15:08.434 13:09:14 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:15:08.434 13:09:14 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:15:08.434 13:09:14 nvme -- common/autotest_common.sh@1075 -- # stubpid=64671 00:15:08.434 Waiting for stub to ready for secondary processes... 00:15:08.434 13:09:14 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:15:08.434 13:09:14 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:15:08.434 13:09:14 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:15:08.434 13:09:14 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64671 ]] 00:15:08.434 13:09:14 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:15:08.434 [2024-12-06 13:09:14.921114] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:15:08.434 [2024-12-06 13:09:14.921326] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:15:09.368 [2024-12-06 13:09:15.754083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.368 13:09:15 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:15:09.368 13:09:15 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64671 ]] 00:15:09.368 13:09:15 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:15:09.368 [2024-12-06 13:09:15.877409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.368 [2024-12-06 13:09:15.877482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.368 [2024-12-06 13:09:15.877484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.627 [2024-12-06 13:09:15.899755] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:15:09.627 [2024-12-06 13:09:15.899825] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:09.627 [2024-12-06 13:09:15.912395] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:15:09.627 [2024-12-06 13:09:15.912560] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:15:09.627 [2024-12-06 13:09:15.915158] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:09.627 [2024-12-06 13:09:15.915449] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:15:09.627 [2024-12-06 13:09:15.915534] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:15:09.627 [2024-12-06 13:09:15.917979] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:09.627 [2024-12-06 13:09:15.918181] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:15:09.627 [2024-12-06 13:09:15.918275] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:15:09.627 [2024-12-06 13:09:15.921060] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:09.627 [2024-12-06 13:09:15.921273] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:15:09.627 [2024-12-06 13:09:15.921361] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:15:09.627 [2024-12-06 13:09:15.921424] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:15:09.627 [2024-12-06 13:09:15.921475] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:15:10.562 done. 00:15:10.562 13:09:16 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:15:10.562 13:09:16 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:15:10.562 13:09:16 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:15:10.562 13:09:16 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:15:10.562 13:09:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.562 13:09:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.562 ************************************ 00:15:10.562 START TEST nvme_reset 00:15:10.562 ************************************ 00:15:10.562 13:09:16 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:15:10.820 Initializing NVMe Controllers 00:15:10.820 Skipping QEMU NVMe SSD at 0000:00:10.0 00:15:10.820 Skipping QEMU NVMe SSD at 0000:00:11.0 00:15:10.820 Skipping QEMU NVMe SSD at 0000:00:13.0 00:15:10.820 Skipping QEMU NVMe SSD at 0000:00:12.0 00:15:10.820 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:15:10.820 00:15:10.820 real 0m0.336s 00:15:10.820 user 0m0.138s 00:15:10.820 sys 0m0.152s 00:15:10.820 ************************************ 00:15:10.820 END TEST nvme_reset 00:15:10.820 ************************************ 00:15:10.820 13:09:17 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.820 13:09:17 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:15:10.820 13:09:17 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:15:10.820 13:09:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:10.820 13:09:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.820 13:09:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.820 ************************************ 00:15:10.820 START TEST nvme_identify 00:15:10.820 ************************************ 00:15:10.820 13:09:17 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:15:10.820 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:15:10.820 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:15:10.820 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:15:10.820 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:15:10.820 13:09:17 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:10.820 13:09:17 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:15:10.820 13:09:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:10.820 13:09:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:10.820 13:09:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:10.820 13:09:17 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:10.821 13:09:17 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:10.821 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:15:11.390 [2024-12-06 13:09:17.626260] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64704 terminated unexpected 00:15:11.390 ===================================================== 00:15:11.390 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:11.390 ===================================================== 00:15:11.390 Controller Capabilities/Features 00:15:11.390 ================================ 00:15:11.390 Vendor ID: 1b36 00:15:11.390 Subsystem Vendor ID: 1af4 00:15:11.390 Serial Number: 12340 00:15:11.390 Model Number: QEMU NVMe Ctrl 00:15:11.390 Firmware Version: 8.0.0 00:15:11.390 Recommended Arb Burst: 6 00:15:11.390 IEEE OUI Identifier: 00 54 52 00:15:11.390 Multi-path I/O 00:15:11.390 May have multiple subsystem ports: No 00:15:11.390 May have multiple controllers: No 00:15:11.390 Associated with SR-IOV VF: No 00:15:11.390 Max Data Transfer Size: 524288 00:15:11.390 Max Number of Namespaces: 256 00:15:11.390 Max Number of I/O Queues: 64 00:15:11.390 NVMe Specification Version (VS): 1.4 00:15:11.390 NVMe Specification Version (Identify): 1.4 00:15:11.390 Maximum Queue Entries: 2048 00:15:11.390 Contiguous Queues Required: Yes 00:15:11.390 Arbitration Mechanisms Supported 00:15:11.390 Weighted Round Robin: Not Supported 00:15:11.390 Vendor Specific: Not Supported 00:15:11.390 Reset Timeout: 7500 ms 00:15:11.390 Doorbell Stride: 4 bytes 00:15:11.390 NVM Subsystem Reset: Not Supported 00:15:11.390 Command Sets Supported 00:15:11.390 NVM Command Set: Supported 00:15:11.390 Boot Partition: Not Supported 00:15:11.390 Memory Page Size Minimum: 4096 bytes 00:15:11.390 Memory Page Size Maximum: 65536 bytes 00:15:11.390 Persistent Memory Region: Not Supported 00:15:11.390 Optional Asynchronous Events Supported 00:15:11.390 Namespace Attribute Notices: Supported 00:15:11.390 Firmware Activation Notices: Not Supported 00:15:11.390 ANA Change Notices: Not Supported 00:15:11.390 PLE Aggregate Log Change Notices: Not Supported 00:15:11.390 LBA Status Info Alert Notices: Not Supported 00:15:11.390 EGE Aggregate Log Change Notices: Not Supported 00:15:11.390 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.390 Zone Descriptor Change Notices: Not Supported 00:15:11.390 Discovery Log Change Notices: Not Supported 00:15:11.390 Controller Attributes 00:15:11.390 128-bit Host Identifier: Not Supported 00:15:11.390 Non-Operational Permissive Mode: Not Supported 00:15:11.390 NVM Sets: Not Supported 00:15:11.390 Read Recovery Levels: Not Supported 00:15:11.390 Endurance Groups: Not Supported 00:15:11.390 Predictable Latency Mode: Not Supported 00:15:11.390 Traffic Based Keep ALive: Not Supported 00:15:11.390 Namespace Granularity: Not Supported 00:15:11.390 SQ Associations: Not Supported 00:15:11.390 UUID List: Not Supported 00:15:11.390 Multi-Domain Subsystem: Not Supported 00:15:11.390 Fixed Capacity Management: Not Supported 00:15:11.390 Variable Capacity Management: Not Supported 00:15:11.390 Delete Endurance Group: Not Supported 00:15:11.390 Delete NVM Set: Not Supported 00:15:11.390 Extended LBA Formats Supported: Supported 00:15:11.390 Flexible Data Placement Supported: Not Supported 00:15:11.390 00:15:11.390 Controller Memory Buffer Support 00:15:11.390 ================================ 00:15:11.390 Supported: No 00:15:11.390 00:15:11.390 Persistent
Memory Region Support 00:15:11.391 ================================ 00:15:11.391 Supported: No 00:15:11.391 00:15:11.391 Admin Command Set Attributes 00:15:11.391 ============================ 00:15:11.391 Security Send/Receive: Not Supported 00:15:11.391 Format NVM: Supported 00:15:11.391 Firmware Activate/Download: Not Supported 00:15:11.391 Namespace Management: Supported 00:15:11.391 Device Self-Test: Not Supported 00:15:11.391 Directives: Supported 00:15:11.391 NVMe-MI: Not Supported 00:15:11.391 Virtualization Management: Not Supported 00:15:11.391 Doorbell Buffer Config: Supported 00:15:11.391 Get LBA Status Capability: Not Supported 00:15:11.391 Command & Feature Lockdown Capability: Not Supported 00:15:11.391 Abort Command Limit: 4 00:15:11.391 Async Event Request Limit: 4 00:15:11.391 Number of Firmware Slots: N/A 00:15:11.391 Firmware Slot 1 Read-Only: N/A 00:15:11.391 Firmware Activation Without Reset: N/A 00:15:11.391 Multiple Update Detection Support: N/A 00:15:11.391 Firmware Update Granularity: No Information Provided 00:15:11.391 Per-Namespace SMART Log: Yes 00:15:11.391 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.391 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:15:11.391 Command Effects Log Page: Supported 00:15:11.391 Get Log Page Extended Data: Supported 00:15:11.391 Telemetry Log Pages: Not Supported 00:15:11.391 Persistent Event Log Pages: Not Supported 00:15:11.391 Supported Log Pages Log Page: May Support 00:15:11.391 Commands Supported & Effects Log Page: Not Supported 00:15:11.391 Feature Identifiers & Effects Log Page:May Support 00:15:11.391 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.391 Data Area 4 for Telemetry Log: Not Supported 00:15:11.391 Error Log Page Entries Supported: 1 00:15:11.391 Keep Alive: Not Supported 00:15:11.391 00:15:11.391 NVM Command Set Attributes 00:15:11.391 ========================== 00:15:11.391 Submission Queue Entry Size 00:15:11.391 Max: 64 00:15:11.391 Min: 64 00:15:11.391 Completion Queue Entry Size 00:15:11.391 Max: 16 00:15:11.391 Min: 16 00:15:11.391 Number of Namespaces: 256 00:15:11.391 Compare Command: Supported 00:15:11.391 Write Uncorrectable Command: Not Supported 00:15:11.391 Dataset Management Command: Supported 00:15:11.391 Write Zeroes Command: Supported 00:15:11.391 Set Features Save Field: Supported 00:15:11.391 Reservations: Not Supported 00:15:11.391 Timestamp: Supported 00:15:11.391 Copy: Supported 00:15:11.391 Volatile Write Cache: Present 00:15:11.391 Atomic Write Unit (Normal): 1 00:15:11.391 Atomic Write Unit (PFail): 1 00:15:11.391 Atomic Compare & Write Unit: 1 00:15:11.391 Fused Compare & Write: Not Supported 00:15:11.391 Scatter-Gather List 00:15:11.391 SGL Command Set: Supported 00:15:11.391 SGL Keyed: Not Supported 00:15:11.391 SGL Bit Bucket Descriptor: Not Supported 00:15:11.391 SGL Metadata Pointer: Not Supported 00:15:11.391 Oversized SGL: Not Supported 00:15:11.391 SGL Metadata Address: Not Supported 00:15:11.391 SGL Offset: Not Supported 00:15:11.391 Transport SGL Data Block: Not Supported 00:15:11.391 Replay Protected Memory Block: Not Supported 00:15:11.391 00:15:11.391 Firmware Slot Information 00:15:11.391 ========================= 00:15:11.391 Active slot: 1 00:15:11.391 Slot 1 Firmware Revision: 1.0 00:15:11.391 00:15:11.391 00:15:11.391 Commands Supported and Effects 00:15:11.391 ============================== 00:15:11.391 Admin Commands 00:15:11.391 -------------- 00:15:11.391 Delete I/O Submission Queue (00h): Supported 00:15:11.391 Create I/O Submission 
Queue (01h): Supported 00:15:11.391 Get Log Page (02h): Supported 00:15:11.391 Delete I/O Completion Queue (04h): Supported 00:15:11.391 Create I/O Completion Queue (05h): Supported 00:15:11.391 Identify (06h): Supported 00:15:11.391 Abort (08h): Supported 00:15:11.391 Set Features (09h): Supported 00:15:11.391 Get Features (0Ah): Supported 00:15:11.391 Asynchronous Event Request (0Ch): Supported 00:15:11.391 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:11.391 Directive Send (19h): Supported 00:15:11.391 Directive Receive (1Ah): Supported 00:15:11.391 Virtualization Management (1Ch): Supported 00:15:11.391 Doorbell Buffer Config (7Ch): Supported 00:15:11.391 Format NVM (80h): Supported LBA-Change 00:15:11.391 I/O Commands 00:15:11.391 ------------ 00:15:11.391 Flush (00h): Supported LBA-Change 00:15:11.391 Write (01h): Supported LBA-Change 00:15:11.391 Read (02h): Supported 00:15:11.391 Compare (05h): Supported 00:15:11.391 Write Zeroes (08h): Supported LBA-Change 00:15:11.391 Dataset Management (09h): Supported LBA-Change 00:15:11.391 Unknown (0Ch): Supported 00:15:11.391 Unknown (12h): Supported 00:15:11.391 Copy (19h): Supported LBA-Change 00:15:11.391 Unknown (1Dh): Supported LBA-Change 00:15:11.391 00:15:11.391 Error Log 00:15:11.391 ========= 00:15:11.391 00:15:11.391 Arbitration 00:15:11.391 =========== 00:15:11.391 Arbitration Burst: no limit 00:15:11.391 00:15:11.391 Power Management 00:15:11.391 ================ 00:15:11.391 Number of Power States: 1 00:15:11.391 Current Power State: Power State #0 00:15:11.391 Power State #0: 00:15:11.391 Max Power: 25.00 W 00:15:11.391 Non-Operational State: Operational 00:15:11.391 Entry Latency: 16 microseconds 00:15:11.391 Exit Latency: 4 microseconds 00:15:11.391 Relative Read Throughput: 0 00:15:11.391 Relative Read Latency: 0 00:15:11.391 Relative Write Throughput: 0 00:15:11.391 Relative Write Latency: 0 00:15:11.391 [2024-12-06 13:09:17.627731] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64704 terminated unexpected 00:15:11.391 Idle Power: Not Reported 00:15:11.391 Active Power: Not Reported 00:15:11.391 Non-Operational Permissive Mode: Not Supported 00:15:11.391 00:15:11.391 Health Information 00:15:11.391 ================== 00:15:11.391 Critical Warnings: 00:15:11.391 Available Spare Space: OK 00:15:11.391 Temperature: OK 00:15:11.391 Device Reliability: OK 00:15:11.391 Read Only: No 00:15:11.391 Volatile Memory Backup: OK 00:15:11.391 Current Temperature: 323 Kelvin (50 Celsius) 00:15:11.391 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:11.391 Available Spare: 0% 00:15:11.391 Available Spare Threshold: 0% 00:15:11.391 Life Percentage Used: 0% 00:15:11.391 Data Units Read: 656 00:15:11.391 Data Units Written: 584 00:15:11.391 Host Read Commands: 32744 00:15:11.391 Host Write Commands: 32530 00:15:11.391 Controller Busy Time: 0 minutes 00:15:11.391 Power Cycles: 0 00:15:11.391 Power On Hours: 0 hours 00:15:11.391 Unsafe Shutdowns: 0 00:15:11.391 Unrecoverable Media Errors: 0 00:15:11.391 Lifetime Error Log Entries: 0 00:15:11.391 Warning Temperature Time: 0 minutes 00:15:11.391 Critical Temperature Time: 0 minutes 00:15:11.391 00:15:11.391 Number of Queues 00:15:11.391 ================ 00:15:11.391 Number of I/O Submission Queues: 64 00:15:11.391 Number of I/O Completion Queues: 64 00:15:11.391 00:15:11.391 ZNS Specific Controller Data 00:15:11.391 ============================ 00:15:11.391 Zone Append Size Limit: 0 00:15:11.391 00:15:11.391
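The identify pass recorded above takes only two shell steps, both visible in the xtrace: collect the controllers' PCI addresses, then run the identify tool. A minimal sketch of that pattern follows; it is not part of the captured log, the jq filter and paths are taken verbatim from the commands above, and the variable names are illustrative.

#!/usr/bin/env bash
# Sketch of the enumeration pattern used by the nvme_identify test above:
# gen_nvme.sh emits a JSON config, jq extracts each controller's PCI
# address (traddr), and spdk_nvme_identify dumps every attached controller.
rootdir=/home/vagrant/spdk_repo/spdk   # repo path as it appears in this log

# Same jq filter get_nvme_bdfs runs in the xtrace above.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf 'controller: %s\n' "${bdfs[@]}"

# -i 0 mirrors the invocation recorded above; the per-controller reports
# interleaved in this log are its stdout.
"$rootdir/build/bin/spdk_nvme_identify" -i 0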
00:15:11.391 Active Namespaces 00:15:11.391 ================= 00:15:11.391 Namespace ID:1 00:15:11.391 Error Recovery Timeout: Unlimited 00:15:11.391 Command Set Identifier: NVM (00h) 00:15:11.391 Deallocate: Supported 00:15:11.391 Deallocated/Unwritten Error: Supported 00:15:11.391 Deallocated Read Value: All 0x00 00:15:11.391 Deallocate in Write Zeroes: Not Supported 00:15:11.391 Deallocated Guard Field: 0xFFFF 00:15:11.391 Flush: Supported 00:15:11.391 Reservation: Not Supported 00:15:11.391 Metadata Transferred as: Separate Metadata Buffer 00:15:11.391 Namespace Sharing Capabilities: Private 00:15:11.391 Size (in LBAs): 1548666 (5GiB) 00:15:11.391 Capacity (in LBAs): 1548666 (5GiB) 00:15:11.391 Utilization (in LBAs): 1548666 (5GiB) 00:15:11.391 Thin Provisioning: Not Supported 00:15:11.391 Per-NS Atomic Units: No 00:15:11.391 Maximum Single Source Range Length: 128 00:15:11.392 Maximum Copy Length: 128 00:15:11.392 Maximum Source Range Count: 128 00:15:11.392 NGUID/EUI64 Never Reused: No 00:15:11.392 Namespace Write Protected: No 00:15:11.392 Number of LBA Formats: 8 00:15:11.392 Current LBA Format: LBA Format #07 00:15:11.392 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.392 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:11.392 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:11.392 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:11.392 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:11.392 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:11.392 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:11.392 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:11.392 00:15:11.392 NVM Specific Namespace Data 00:15:11.392 =========================== 00:15:11.392 Logical Block Storage Tag Mask: 0 00:15:11.392 Protection Information Capabilities: 00:15:11.392 16b Guard Protection Information Storage Tag Support: No 00:15:11.392 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:11.392 Storage Tag Check Read Support: No 00:15:11.392 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.392 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.392 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.392 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.392 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.392 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.392 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.392 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.392 ===================================================== 00:15:11.392 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:11.392 ===================================================== 00:15:11.392 Controller Capabilities/Features 00:15:11.392 ================================ 00:15:11.392 Vendor ID: 1b36 00:15:11.392 Subsystem Vendor ID: 1af4 00:15:11.392 Serial Number: 12341 00:15:11.392 Model Number: QEMU NVMe Ctrl 00:15:11.392 Firmware Version: 8.0.0 00:15:11.392 Recommended Arb Burst: 6 00:15:11.392 IEEE OUI Identifier: 00 54 52 00:15:11.392 Multi-path I/O 00:15:11.392 May have multiple subsystem ports: No 00:15:11.392 May have multiple controllers: No 00:15:11.392 
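As a quick sanity check of the ns1 geometry reported above for the 12340 controller (a sketch, not part of the captured output): 1548666 LBAs in the current LBA Format #07, whose data area is 4096 bytes per block, work out to the rounded "(5GiB)" the tool prints. The shell arithmetic below assumes only those two numbers from the dump.

# 1548666 LBAs x 4096-byte data area; the 64-byte metadata region of
# LBA Format #07 is carried out of band and is not counted here.
lbas=1548666
data_size=4096
bytes=$((lbas * data_size))            # 6343335936
gib=$((bytes / 1024 / 1024 / 1024))    # integer GiB -> 5, matching "(5GiB)" above
echo "${bytes} bytes -> ${gib}GiB"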
Associated with SR-IOV VF: No 00:15:11.392 Max Data Transfer Size: 524288 00:15:11.392 Max Number of Namespaces: 256 00:15:11.392 Max Number of I/O Queues: 64 00:15:11.392 NVMe Specification Version (VS): 1.4 00:15:11.392 NVMe Specification Version (Identify): 1.4 00:15:11.392 Maximum Queue Entries: 2048 00:15:11.392 Contiguous Queues Required: Yes 00:15:11.392 Arbitration Mechanisms Supported 00:15:11.392 Weighted Round Robin: Not Supported 00:15:11.392 Vendor Specific: Not Supported 00:15:11.392 Reset Timeout: 7500 ms 00:15:11.392 Doorbell Stride: 4 bytes 00:15:11.392 NVM Subsystem Reset: Not Supported 00:15:11.392 Command Sets Supported 00:15:11.392 NVM Command Set: Supported 00:15:11.392 Boot Partition: Not Supported 00:15:11.392 Memory Page Size Minimum: 4096 bytes 00:15:11.392 Memory Page Size Maximum: 65536 bytes 00:15:11.392 Persistent Memory Region: Not Supported 00:15:11.392 Optional Asynchronous Events Supported 00:15:11.392 Namespace Attribute Notices: Supported 00:15:11.392 Firmware Activation Notices: Not Supported 00:15:11.392 ANA Change Notices: Not Supported 00:15:11.392 PLE Aggregate Log Change Notices: Not Supported 00:15:11.392 LBA Status Info Alert Notices: Not Supported 00:15:11.392 EGE Aggregate Log Change Notices: Not Supported 00:15:11.392 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.392 Zone Descriptor Change Notices: Not Supported 00:15:11.392 Discovery Log Change Notices: Not Supported 00:15:11.392 Controller Attributes 00:15:11.392 128-bit Host Identifier: Not Supported 00:15:11.392 Non-Operational Permissive Mode: Not Supported 00:15:11.392 NVM Sets: Not Supported 00:15:11.392 Read Recovery Levels: Not Supported 00:15:11.392 Endurance Groups: Not Supported 00:15:11.392 Predictable Latency Mode: Not Supported 00:15:11.392 Traffic Based Keep ALive: Not Supported 00:15:11.392 Namespace Granularity: Not Supported 00:15:11.392 SQ Associations: Not Supported 00:15:11.392 UUID List: Not Supported 00:15:11.392 Multi-Domain Subsystem: Not Supported 00:15:11.392 Fixed Capacity Management: Not Supported 00:15:11.392 Variable Capacity Management: Not Supported 00:15:11.392 Delete Endurance Group: Not Supported 00:15:11.392 Delete NVM Set: Not Supported 00:15:11.392 Extended LBA Formats Supported: Supported 00:15:11.392 Flexible Data Placement Supported: Not Supported 00:15:11.392 00:15:11.392 Controller Memory Buffer Support 00:15:11.392 ================================ 00:15:11.392 Supported: No 00:15:11.392 00:15:11.392 Persistent Memory Region Support 00:15:11.392 ================================ 00:15:11.392 Supported: No 00:15:11.392 00:15:11.392 Admin Command Set Attributes 00:15:11.392 ============================ 00:15:11.392 Security Send/Receive: Not Supported 00:15:11.392 Format NVM: Supported 00:15:11.392 Firmware Activate/Download: Not Supported 00:15:11.392 Namespace Management: Supported 00:15:11.392 Device Self-Test: Not Supported 00:15:11.392 Directives: Supported 00:15:11.392 NVMe-MI: Not Supported 00:15:11.392 Virtualization Management: Not Supported 00:15:11.392 Doorbell Buffer Config: Supported 00:15:11.392 Get LBA Status Capability: Not Supported 00:15:11.392 Command & Feature Lockdown Capability: Not Supported 00:15:11.392 Abort Command Limit: 4 00:15:11.392 Async Event Request Limit: 4 00:15:11.392 Number of Firmware Slots: N/A 00:15:11.392 Firmware Slot 1 Read-Only: N/A 00:15:11.392 Firmware Activation Without Reset: N/A 00:15:11.392 Multiple Update Detection Support: N/A 00:15:11.392 Firmware Update Granularity: No Information 
Provided 00:15:11.392 Per-Namespace SMART Log: Yes 00:15:11.392 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.392 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:15:11.392 Command Effects Log Page: Supported 00:15:11.392 Get Log Page Extended Data: Supported 00:15:11.392 Telemetry Log Pages: Not Supported 00:15:11.392 Persistent Event Log Pages: Not Supported 00:15:11.392 Supported Log Pages Log Page: May Support 00:15:11.392 Commands Supported & Effects Log Page: Not Supported 00:15:11.392 Feature Identifiers & Effects Log Page:May Support 00:15:11.392 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.392 Data Area 4 for Telemetry Log: Not Supported 00:15:11.392 Error Log Page Entries Supported: 1 00:15:11.392 Keep Alive: Not Supported 00:15:11.392 00:15:11.392 NVM Command Set Attributes 00:15:11.392 ========================== 00:15:11.392 Submission Queue Entry Size 00:15:11.392 Max: 64 00:15:11.392 Min: 64 00:15:11.392 Completion Queue Entry Size 00:15:11.392 Max: 16 00:15:11.392 Min: 16 00:15:11.392 Number of Namespaces: 256 00:15:11.392 Compare Command: Supported 00:15:11.392 Write Uncorrectable Command: Not Supported 00:15:11.392 Dataset Management Command: Supported 00:15:11.392 Write Zeroes Command: Supported 00:15:11.392 Set Features Save Field: Supported 00:15:11.392 Reservations: Not Supported 00:15:11.392 Timestamp: Supported 00:15:11.392 Copy: Supported 00:15:11.392 Volatile Write Cache: Present 00:15:11.392 Atomic Write Unit (Normal): 1 00:15:11.392 Atomic Write Unit (PFail): 1 00:15:11.392 Atomic Compare & Write Unit: 1 00:15:11.392 Fused Compare & Write: Not Supported 00:15:11.392 Scatter-Gather List 00:15:11.392 SGL Command Set: Supported 00:15:11.392 SGL Keyed: Not Supported 00:15:11.392 SGL Bit Bucket Descriptor: Not Supported 00:15:11.392 SGL Metadata Pointer: Not Supported 00:15:11.392 Oversized SGL: Not Supported 00:15:11.392 SGL Metadata Address: Not Supported 00:15:11.392 SGL Offset: Not Supported 00:15:11.392 Transport SGL Data Block: Not Supported 00:15:11.392 Replay Protected Memory Block: Not Supported 00:15:11.392 00:15:11.392 Firmware Slot Information 00:15:11.392 ========================= 00:15:11.392 Active slot: 1 00:15:11.392 Slot 1 Firmware Revision: 1.0 00:15:11.392 00:15:11.392 00:15:11.392 Commands Supported and Effects 00:15:11.392 ============================== 00:15:11.392 Admin Commands 00:15:11.392 -------------- 00:15:11.392 Delete I/O Submission Queue (00h): Supported 00:15:11.392 Create I/O Submission Queue (01h): Supported 00:15:11.392 Get Log Page (02h): Supported 00:15:11.392 Delete I/O Completion Queue (04h): Supported 00:15:11.392 Create I/O Completion Queue (05h): Supported 00:15:11.392 Identify (06h): Supported 00:15:11.392 Abort (08h): Supported 00:15:11.392 Set Features (09h): Supported 00:15:11.392 Get Features (0Ah): Supported 00:15:11.393 Asynchronous Event Request (0Ch): Supported 00:15:11.393 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:11.393 Directive Send (19h): Supported 00:15:11.393 Directive Receive (1Ah): Supported 00:15:11.393 Virtualization Management (1Ch): Supported 00:15:11.393 Doorbell Buffer Config (7Ch): Supported 00:15:11.393 Format NVM (80h): Supported LBA-Change 00:15:11.393 I/O Commands 00:15:11.393 ------------ 00:15:11.393 Flush (00h): Supported LBA-Change 00:15:11.393 Write (01h): Supported LBA-Change 00:15:11.393 Read (02h): Supported 00:15:11.393 Compare (05h): Supported 00:15:11.393 Write Zeroes (08h): Supported LBA-Change 00:15:11.393 Dataset Management (09h): 
Supported LBA-Change 00:15:11.393 Unknown (0Ch): Supported 00:15:11.393 Unknown (12h): Supported 00:15:11.393 Copy (19h): Supported LBA-Change 00:15:11.393 Unknown (1Dh): Supported LBA-Change 00:15:11.393 00:15:11.393 Error Log 00:15:11.393 ========= 00:15:11.393 00:15:11.393 Arbitration 00:15:11.393 =========== 00:15:11.393 Arbitration Burst: no limit 00:15:11.393 00:15:11.393 Power Management 00:15:11.393 ================ 00:15:11.393 Number of Power States: 1 00:15:11.393 Current Power State: Power State #0 00:15:11.393 Power State #0: 00:15:11.393 Max Power: 25.00 W 00:15:11.393 Non-Operational State: Operational 00:15:11.393 Entry Latency: 16 microseconds 00:15:11.393 Exit Latency: 4 microseconds 00:15:11.393 Relative Read Throughput: 0 00:15:11.393 Relative Read Latency: 0 00:15:11.393 Relative Write Throughput: 0 00:15:11.393 Relative Write Latency: 0 00:15:11.393 Idle Power: Not Reported 00:15:11.393 Active Power: Not Reported 00:15:11.393 Non-Operational Permissive Mode: Not Supported 00:15:11.393 00:15:11.393 Health Information 00:15:11.393 ================== 00:15:11.393 Critical Warnings: 00:15:11.393 Available Spare Space: OK 00:15:11.393 [2024-12-06 13:09:17.628777] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64704 terminated unexpected 00:15:11.393 Temperature: OK 00:15:11.393 Device Reliability: OK 00:15:11.393 Read Only: No 00:15:11.393 Volatile Memory Backup: OK 00:15:11.393 Current Temperature: 323 Kelvin (50 Celsius) 00:15:11.393 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:11.393 Available Spare: 0% 00:15:11.393 Available Spare Threshold: 0% 00:15:11.393 Life Percentage Used: 0% 00:15:11.393 Data Units Read: 963 00:15:11.393 Data Units Written: 830 00:15:11.393 Host Read Commands: 47786 00:15:11.393 Host Write Commands: 46572 00:15:11.393 Controller Busy Time: 0 minutes 00:15:11.393 Power Cycles: 0 00:15:11.393 Power On Hours: 0 hours 00:15:11.393 Unsafe Shutdowns: 0 00:15:11.393 Unrecoverable Media Errors: 0 00:15:11.393 Lifetime Error Log Entries: 0 00:15:11.393 Warning Temperature Time: 0 minutes 00:15:11.393 Critical Temperature Time: 0 minutes 00:15:11.393 00:15:11.393 Number of Queues 00:15:11.393 ================ 00:15:11.393 Number of I/O Submission Queues: 64 00:15:11.393 Number of I/O Completion Queues: 64 00:15:11.393 00:15:11.393 ZNS Specific Controller Data 00:15:11.393 ============================ 00:15:11.393 Zone Append Size Limit: 0 00:15:11.393 00:15:11.393 00:15:11.393 Active Namespaces 00:15:11.393 ================= 00:15:11.393 Namespace ID:1 00:15:11.393 Error Recovery Timeout: Unlimited 00:15:11.393 Command Set Identifier: NVM (00h) 00:15:11.393 Deallocate: Supported 00:15:11.393 Deallocated/Unwritten Error: Supported 00:15:11.393 Deallocated Read Value: All 0x00 00:15:11.393 Deallocate in Write Zeroes: Not Supported 00:15:11.393 Deallocated Guard Field: 0xFFFF 00:15:11.393 Flush: Supported 00:15:11.393 Reservation: Not Supported 00:15:11.393 Namespace Sharing Capabilities: Private 00:15:11.393 Size (in LBAs): 1310720 (5GiB) 00:15:11.393 Capacity (in LBAs): 1310720 (5GiB) 00:15:11.393 Utilization (in LBAs): 1310720 (5GiB) 00:15:11.393 Thin Provisioning: Not Supported 00:15:11.393 Per-NS Atomic Units: No 00:15:11.393 Maximum Single Source Range Length: 128 00:15:11.393 Maximum Copy Length: 128 00:15:11.393 Maximum Source Range Count: 128 00:15:11.393 NGUID/EUI64 Never Reused: No 00:15:11.393 Namespace Write Protected: No 00:15:11.393 Number of LBA Formats: 8 00:15:11.393 Current LBA Format: LBA
Format #04 00:15:11.393 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.393 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:11.393 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:11.393 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:11.393 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:11.393 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:11.393 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:11.393 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:11.393 00:15:11.393 NVM Specific Namespace Data 00:15:11.393 =========================== 00:15:11.393 Logical Block Storage Tag Mask: 0 00:15:11.393 Protection Information Capabilities: 00:15:11.393 16b Guard Protection Information Storage Tag Support: No 00:15:11.393 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:11.393 Storage Tag Check Read Support: No 00:15:11.393 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.393 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.393 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.393 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.393 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.393 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.393 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.393 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.393 ===================================================== 00:15:11.393 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:11.393 ===================================================== 00:15:11.393 Controller Capabilities/Features 00:15:11.393 ================================ 00:15:11.393 Vendor ID: 1b36 00:15:11.393 Subsystem Vendor ID: 1af4 00:15:11.393 Serial Number: 12343 00:15:11.393 Model Number: QEMU NVMe Ctrl 00:15:11.393 Firmware Version: 8.0.0 00:15:11.393 Recommended Arb Burst: 6 00:15:11.393 IEEE OUI Identifier: 00 54 52 00:15:11.393 Multi-path I/O 00:15:11.393 May have multiple subsystem ports: No 00:15:11.393 May have multiple controllers: Yes 00:15:11.393 Associated with SR-IOV VF: No 00:15:11.393 Max Data Transfer Size: 524288 00:15:11.393 Max Number of Namespaces: 256 00:15:11.393 Max Number of I/O Queues: 64 00:15:11.393 NVMe Specification Version (VS): 1.4 00:15:11.393 NVMe Specification Version (Identify): 1.4 00:15:11.393 Maximum Queue Entries: 2048 00:15:11.393 Contiguous Queues Required: Yes 00:15:11.393 Arbitration Mechanisms Supported 00:15:11.393 Weighted Round Robin: Not Supported 00:15:11.393 Vendor Specific: Not Supported 00:15:11.393 Reset Timeout: 7500 ms 00:15:11.393 Doorbell Stride: 4 bytes 00:15:11.393 NVM Subsystem Reset: Not Supported 00:15:11.393 Command Sets Supported 00:15:11.393 NVM Command Set: Supported 00:15:11.393 Boot Partition: Not Supported 00:15:11.393 Memory Page Size Minimum: 4096 bytes 00:15:11.393 Memory Page Size Maximum: 65536 bytes 00:15:11.393 Persistent Memory Region: Not Supported 00:15:11.393 Optional Asynchronous Events Supported 00:15:11.393 Namespace Attribute Notices: Supported 00:15:11.393 Firmware Activation Notices: Not Supported 00:15:11.393 ANA Change Notices: Not Supported 00:15:11.393 PLE Aggregate Log Change 
Notices: Not Supported 00:15:11.393 LBA Status Info Alert Notices: Not Supported 00:15:11.393 EGE Aggregate Log Change Notices: Not Supported 00:15:11.393 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.393 Zone Descriptor Change Notices: Not Supported 00:15:11.393 Discovery Log Change Notices: Not Supported 00:15:11.393 Controller Attributes 00:15:11.393 128-bit Host Identifier: Not Supported 00:15:11.393 Non-Operational Permissive Mode: Not Supported 00:15:11.393 NVM Sets: Not Supported 00:15:11.393 Read Recovery Levels: Not Supported 00:15:11.393 Endurance Groups: Supported 00:15:11.393 Predictable Latency Mode: Not Supported 00:15:11.393 Traffic Based Keep ALive: Not Supported 00:15:11.393 Namespace Granularity: Not Supported 00:15:11.393 SQ Associations: Not Supported 00:15:11.393 UUID List: Not Supported 00:15:11.393 Multi-Domain Subsystem: Not Supported 00:15:11.393 Fixed Capacity Management: Not Supported 00:15:11.393 Variable Capacity Management: Not Supported 00:15:11.393 Delete Endurance Group: Not Supported 00:15:11.393 Delete NVM Set: Not Supported 00:15:11.393 Extended LBA Formats Supported: Supported 00:15:11.393 Flexible Data Placement Supported: Supported 00:15:11.393 00:15:11.393 Controller Memory Buffer Support 00:15:11.393 ================================ 00:15:11.393 Supported: No 00:15:11.393 00:15:11.394 Persistent Memory Region Support 00:15:11.394 ================================ 00:15:11.394 Supported: No 00:15:11.394 00:15:11.394 Admin Command Set Attributes 00:15:11.394 ============================ 00:15:11.394 Security Send/Receive: Not Supported 00:15:11.394 Format NVM: Supported 00:15:11.394 Firmware Activate/Download: Not Supported 00:15:11.394 Namespace Management: Supported 00:15:11.394 Device Self-Test: Not Supported 00:15:11.394 Directives: Supported 00:15:11.394 NVMe-MI: Not Supported 00:15:11.394 Virtualization Management: Not Supported 00:15:11.394 Doorbell Buffer Config: Supported 00:15:11.394 Get LBA Status Capability: Not Supported 00:15:11.394 Command & Feature Lockdown Capability: Not Supported 00:15:11.394 Abort Command Limit: 4 00:15:11.394 Async Event Request Limit: 4 00:15:11.394 Number of Firmware Slots: N/A 00:15:11.394 Firmware Slot 1 Read-Only: N/A 00:15:11.394 Firmware Activation Without Reset: N/A 00:15:11.394 Multiple Update Detection Support: N/A 00:15:11.394 Firmware Update Granularity: No Information Provided 00:15:11.394 Per-Namespace SMART Log: Yes 00:15:11.394 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.394 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:15:11.394 Command Effects Log Page: Supported 00:15:11.394 Get Log Page Extended Data: Supported 00:15:11.394 Telemetry Log Pages: Not Supported 00:15:11.394 Persistent Event Log Pages: Not Supported 00:15:11.394 Supported Log Pages Log Page: May Support 00:15:11.394 Commands Supported & Effects Log Page: Not Supported 00:15:11.394 Feature Identifiers & Effects Log Page:May Support 00:15:11.394 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.394 Data Area 4 for Telemetry Log: Not Supported 00:15:11.394 Error Log Page Entries Supported: 1 00:15:11.394 Keep Alive: Not Supported 00:15:11.394 00:15:11.394 NVM Command Set Attributes 00:15:11.394 ========================== 00:15:11.394 Submission Queue Entry Size 00:15:11.394 Max: 64 00:15:11.394 Min: 64 00:15:11.394 Completion Queue Entry Size 00:15:11.394 Max: 16 00:15:11.394 Min: 16 00:15:11.394 Number of Namespaces: 256 00:15:11.394 Compare Command: Supported 00:15:11.394 Write 
Uncorrectable Command: Not Supported 00:15:11.394 Dataset Management Command: Supported 00:15:11.394 Write Zeroes Command: Supported 00:15:11.394 Set Features Save Field: Supported 00:15:11.394 Reservations: Not Supported 00:15:11.394 Timestamp: Supported 00:15:11.394 Copy: Supported 00:15:11.394 Volatile Write Cache: Present 00:15:11.394 Atomic Write Unit (Normal): 1 00:15:11.394 Atomic Write Unit (PFail): 1 00:15:11.394 Atomic Compare & Write Unit: 1 00:15:11.394 Fused Compare & Write: Not Supported 00:15:11.394 Scatter-Gather List 00:15:11.394 SGL Command Set: Supported 00:15:11.394 SGL Keyed: Not Supported 00:15:11.394 SGL Bit Bucket Descriptor: Not Supported 00:15:11.394 SGL Metadata Pointer: Not Supported 00:15:11.394 Oversized SGL: Not Supported 00:15:11.394 SGL Metadata Address: Not Supported 00:15:11.394 SGL Offset: Not Supported 00:15:11.394 Transport SGL Data Block: Not Supported 00:15:11.394 Replay Protected Memory Block: Not Supported 00:15:11.394 00:15:11.394 Firmware Slot Information 00:15:11.394 ========================= 00:15:11.394 Active slot: 1 00:15:11.394 Slot 1 Firmware Revision: 1.0 00:15:11.394 00:15:11.394 00:15:11.394 Commands Supported and Effects 00:15:11.394 ============================== 00:15:11.394 Admin Commands 00:15:11.394 -------------- 00:15:11.394 Delete I/O Submission Queue (00h): Supported 00:15:11.394 Create I/O Submission Queue (01h): Supported 00:15:11.394 Get Log Page (02h): Supported 00:15:11.394 Delete I/O Completion Queue (04h): Supported 00:15:11.394 Create I/O Completion Queue (05h): Supported 00:15:11.394 Identify (06h): Supported 00:15:11.394 Abort (08h): Supported 00:15:11.394 Set Features (09h): Supported 00:15:11.394 Get Features (0Ah): Supported 00:15:11.394 Asynchronous Event Request (0Ch): Supported 00:15:11.394 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:11.394 Directive Send (19h): Supported 00:15:11.394 Directive Receive (1Ah): Supported 00:15:11.394 Virtualization Management (1Ch): Supported 00:15:11.394 Doorbell Buffer Config (7Ch): Supported 00:15:11.394 Format NVM (80h): Supported LBA-Change 00:15:11.394 I/O Commands 00:15:11.394 ------------ 00:15:11.394 Flush (00h): Supported LBA-Change 00:15:11.394 Write (01h): Supported LBA-Change 00:15:11.394 Read (02h): Supported 00:15:11.394 Compare (05h): Supported 00:15:11.394 Write Zeroes (08h): Supported LBA-Change 00:15:11.394 Dataset Management (09h): Supported LBA-Change 00:15:11.394 Unknown (0Ch): Supported 00:15:11.394 Unknown (12h): Supported 00:15:11.394 Copy (19h): Supported LBA-Change 00:15:11.394 Unknown (1Dh): Supported LBA-Change 00:15:11.394 00:15:11.394 Error Log 00:15:11.394 ========= 00:15:11.394 00:15:11.394 Arbitration 00:15:11.394 =========== 00:15:11.394 Arbitration Burst: no limit 00:15:11.394 00:15:11.394 Power Management 00:15:11.394 ================ 00:15:11.394 Number of Power States: 1 00:15:11.394 Current Power State: Power State #0 00:15:11.394 Power State #0: 00:15:11.394 Max Power: 25.00 W 00:15:11.394 Non-Operational State: Operational 00:15:11.394 Entry Latency: 16 microseconds 00:15:11.394 Exit Latency: 4 microseconds 00:15:11.394 Relative Read Throughput: 0 00:15:11.394 Relative Read Latency: 0 00:15:11.394 Relative Write Throughput: 0 00:15:11.394 Relative Write Latency: 0 00:15:11.394 Idle Power: Not Reported 00:15:11.394 Active Power: Not Reported 00:15:11.394 Non-Operational Permissive Mode: Not Supported 00:15:11.394 00:15:11.394 Health Information 00:15:11.394 ================== 00:15:11.394 Critical Warnings: 00:15:11.394 
Available Spare Space: OK 00:15:11.394 Temperature: OK 00:15:11.394 Device Reliability: OK 00:15:11.394 Read Only: No 00:15:11.394 Volatile Memory Backup: OK 00:15:11.394 Current Temperature: 323 Kelvin (50 Celsius) 00:15:11.394 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:11.394 Available Spare: 0% 00:15:11.394 Available Spare Threshold: 0% 00:15:11.394 Life Percentage Used: 0% 00:15:11.394 Data Units Read: 756 00:15:11.394 Data Units Written: 685 00:15:11.394 Host Read Commands: 33707 00:15:11.394 Host Write Commands: 33130 00:15:11.394 Controller Busy Time: 0 minutes 00:15:11.394 Power Cycles: 0 00:15:11.394 Power On Hours: 0 hours 00:15:11.394 Unsafe Shutdowns: 0 00:15:11.394 Unrecoverable Media Errors: 0 00:15:11.394 Lifetime Error Log Entries: 0 00:15:11.394 Warning Temperature Time: 0 minutes 00:15:11.394 Critical Temperature Time: 0 minutes 00:15:11.394 00:15:11.394 Number of Queues 00:15:11.394 ================ 00:15:11.394 Number of I/O Submission Queues: 64 00:15:11.394 Number of I/O Completion Queues: 64 00:15:11.394 00:15:11.394 ZNS Specific Controller Data 00:15:11.394 ============================ 00:15:11.394 Zone Append Size Limit: 0 00:15:11.394 00:15:11.394 00:15:11.394 Active Namespaces 00:15:11.394 ================= 00:15:11.394 Namespace ID:1 00:15:11.394 Error Recovery Timeout: Unlimited 00:15:11.394 Command Set Identifier: NVM (00h) 00:15:11.394 Deallocate: Supported 00:15:11.394 Deallocated/Unwritten Error: Supported 00:15:11.394 Deallocated Read Value: All 0x00 00:15:11.394 Deallocate in Write Zeroes: Not Supported 00:15:11.394 Deallocated Guard Field: 0xFFFF 00:15:11.394 Flush: Supported 00:15:11.394 Reservation: Not Supported 00:15:11.394 Namespace Sharing Capabilities: Multiple Controllers 00:15:11.394 Size (in LBAs): 262144 (1GiB) 00:15:11.394 Capacity (in LBAs): 262144 (1GiB) 00:15:11.394 Utilization (in LBAs): 262144 (1GiB) 00:15:11.394 Thin Provisioning: Not Supported 00:15:11.394 Per-NS Atomic Units: No 00:15:11.394 Maximum Single Source Range Length: 128 00:15:11.394 Maximum Copy Length: 128 00:15:11.394 Maximum Source Range Count: 128 00:15:11.394 NGUID/EUI64 Never Reused: No 00:15:11.394 Namespace Write Protected: No 00:15:11.394 Endurance group ID: 1 00:15:11.394 Number of LBA Formats: 8 00:15:11.394 Current LBA Format: LBA Format #04 00:15:11.394 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.394 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:11.394 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:11.394 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:11.394 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:11.394 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:11.394 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:11.394 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:11.394 00:15:11.394 Get Feature FDP: 00:15:11.394 ================ 00:15:11.394 Enabled: Yes 00:15:11.394 FDP configuration index: 0 00:15:11.394 00:15:11.395 FDP configurations log page 00:15:11.395 =========================== 00:15:11.395 Number of FDP configurations: 1 00:15:11.395 Version: 0 00:15:11.395 Size: 112 00:15:11.395 FDP Configuration Descriptor: 0 00:15:11.395 Descriptor Size: 96 00:15:11.395 Reclaim Group Identifier format: 2 00:15:11.395 FDP Volatile Write Cache: Not Present 00:15:11.395 FDP Configuration: Valid 00:15:11.395 Vendor Specific Size: 0 00:15:11.395 Number of Reclaim Groups: 2 00:15:11.395 Number of Reclaim Unit Handles: 8 00:15:11.395 Max Placement Identifiers: 128 00:15:11.395 Number of Namespaces Supported: 256 00:15:11.395 Reclaim unit Nominal Size: 6000000 bytes 00:15:11.395 Estimated Reclaim Unit Time Limit: Not Reported 00:15:11.395 RUH Desc #000: RUH Type: Initially Isolated 00:15:11.395 RUH Desc #001: RUH Type: Initially Isolated 00:15:11.395 RUH Desc #002: RUH Type: Initially Isolated 00:15:11.395 RUH Desc #003: RUH Type: Initially Isolated 00:15:11.395 RUH Desc #004: RUH Type: Initially Isolated 00:15:11.395 RUH Desc #005: RUH Type: Initially Isolated 00:15:11.395 RUH Desc #006: RUH Type: Initially Isolated 00:15:11.395 RUH Desc #007: RUH Type: Initially Isolated 00:15:11.395 00:15:11.395 FDP reclaim unit handle usage log page 00:15:11.395 ====================================== 00:15:11.395 Number of Reclaim Unit Handles: 8 00:15:11.395 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:11.395 RUH Usage Desc #001: RUH Attributes: Unused 00:15:11.395 RUH Usage Desc #002: RUH Attributes: Unused 00:15:11.395 RUH Usage Desc #003: RUH Attributes: Unused 00:15:11.395 RUH Usage Desc #004: RUH Attributes: Unused 00:15:11.395 RUH Usage Desc #005: RUH Attributes: Unused 00:15:11.395 RUH Usage Desc #006: RUH Attributes: Unused 00:15:11.395 RUH Usage Desc #007: RUH Attributes: Unused 00:15:11.395 00:15:11.395 FDP statistics log page 00:15:11.395 ======================= 00:15:11.395 Host bytes with metadata written: 426942464 00:15:11.395 [2024-12-06 13:09:17.630494] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64704 terminated unexpected 00:15:11.395 Media bytes with metadata written: 426987520 00:15:11.395 Media bytes erased: 0 00:15:11.395 00:15:11.395 FDP events log page 00:15:11.395 =================== 00:15:11.395 Number of FDP events: 0 00:15:11.395 00:15:11.395 NVM Specific Namespace Data 00:15:11.395 =========================== 00:15:11.395 Logical Block Storage Tag Mask: 0 00:15:11.395 Protection Information Capabilities: 00:15:11.395 16b Guard Protection Information Storage Tag Support: No 00:15:11.395 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:11.395 Storage Tag Check Read Support: No 00:15:11.395 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.395 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.395 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.395 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.395 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.395 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.395 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.395 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.395 ===================================================== 00:15:11.395 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:11.395 ===================================================== 00:15:11.395 Controller Capabilities/Features 00:15:11.395 ================================ 00:15:11.395 Vendor ID: 1b36 00:15:11.395 Subsystem Vendor ID: 1af4 00:15:11.395 Serial Number: 12342 00:15:11.395 Model Number: QEMU NVMe Ctrl 00:15:11.395 Firmware Version: 8.0.0 00:15:11.395 Recommended Arb Burst: 6 00:15:11.395 IEEE OUI Identifier: 00 54 52 00:15:11.395 Multi-path I/O
00:15:11.395 May have multiple subsystem ports: No 00:15:11.395 May have multiple controllers: No 00:15:11.395 Associated with SR-IOV VF: No 00:15:11.395 Max Data Transfer Size: 524288 00:15:11.395 Max Number of Namespaces: 256 00:15:11.395 Max Number of I/O Queues: 64 00:15:11.395 NVMe Specification Version (VS): 1.4 00:15:11.395 NVMe Specification Version (Identify): 1.4 00:15:11.395 Maximum Queue Entries: 2048 00:15:11.395 Contiguous Queues Required: Yes 00:15:11.395 Arbitration Mechanisms Supported 00:15:11.395 Weighted Round Robin: Not Supported 00:15:11.395 Vendor Specific: Not Supported 00:15:11.395 Reset Timeout: 7500 ms 00:15:11.395 Doorbell Stride: 4 bytes 00:15:11.395 NVM Subsystem Reset: Not Supported 00:15:11.395 Command Sets Supported 00:15:11.395 NVM Command Set: Supported 00:15:11.395 Boot Partition: Not Supported 00:15:11.395 Memory Page Size Minimum: 4096 bytes 00:15:11.395 Memory Page Size Maximum: 65536 bytes 00:15:11.395 Persistent Memory Region: Not Supported 00:15:11.395 Optional Asynchronous Events Supported 00:15:11.395 Namespace Attribute Notices: Supported 00:15:11.395 Firmware Activation Notices: Not Supported 00:15:11.395 ANA Change Notices: Not Supported 00:15:11.395 PLE Aggregate Log Change Notices: Not Supported 00:15:11.395 LBA Status Info Alert Notices: Not Supported 00:15:11.395 EGE Aggregate Log Change Notices: Not Supported 00:15:11.395 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.395 Zone Descriptor Change Notices: Not Supported 00:15:11.395 Discovery Log Change Notices: Not Supported 00:15:11.395 Controller Attributes 00:15:11.395 128-bit Host Identifier: Not Supported 00:15:11.395 Non-Operational Permissive Mode: Not Supported 00:15:11.395 NVM Sets: Not Supported 00:15:11.395 Read Recovery Levels: Not Supported 00:15:11.395 Endurance Groups: Not Supported 00:15:11.395 Predictable Latency Mode: Not Supported 00:15:11.396 Traffic Based Keep Alive: Not Supported 00:15:11.396 Namespace Granularity: Not Supported 00:15:11.396 SQ Associations: Not Supported 00:15:11.396 UUID List: Not Supported 00:15:11.396 Multi-Domain Subsystem: Not Supported 00:15:11.396 Fixed Capacity Management: Not Supported 00:15:11.396 Variable Capacity Management: Not Supported 00:15:11.396 Delete Endurance Group: Not Supported 00:15:11.396 Delete NVM Set: Not Supported 00:15:11.396 Extended LBA Formats Supported: Supported 00:15:11.396 Flexible Data Placement Supported: Not Supported 00:15:11.396 00:15:11.396 Controller Memory Buffer Support 00:15:11.396 ================================ 00:15:11.396 Supported: No 00:15:11.396 00:15:11.396 Persistent Memory Region Support 00:15:11.396 ================================ 00:15:11.396 Supported: No 00:15:11.396 00:15:11.396 Admin Command Set Attributes 00:15:11.396 ============================ 00:15:11.396 Security Send/Receive: Not Supported 00:15:11.396 Format NVM: Supported 00:15:11.396 Firmware Activate/Download: Not Supported 00:15:11.396 Namespace Management: Supported 00:15:11.396 Device Self-Test: Not Supported 00:15:11.396 Directives: Supported 00:15:11.396 NVMe-MI: Not Supported 00:15:11.396 Virtualization Management: Not Supported 00:15:11.396 Doorbell Buffer Config: Supported 00:15:11.396 Get LBA Status Capability: Not Supported 00:15:11.396 Command & Feature Lockdown Capability: Not Supported 00:15:11.396 Abort Command Limit: 4 00:15:11.396 Async Event Request Limit: 4 00:15:11.396 Number of Firmware Slots: N/A 00:15:11.396 Firmware Slot 1 Read-Only: N/A 00:15:11.396 Firmware Activation Without Reset: N/A
00:15:11.396 Multiple Update Detection Support: N/A 00:15:11.396 Firmware Update Granularity: No Information Provided 00:15:11.396 Per-Namespace SMART Log: Yes 00:15:11.396 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.396 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:15:11.396 Command Effects Log Page: Supported 00:15:11.396 Get Log Page Extended Data: Supported 00:15:11.396 Telemetry Log Pages: Not Supported 00:15:11.396 Persistent Event Log Pages: Not Supported 00:15:11.396 Supported Log Pages Log Page: May Support 00:15:11.396 Commands Supported & Effects Log Page: Not Supported 00:15:11.396 Feature Identifiers & Effects Log Page: May Support 00:15:11.396 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.396 Data Area 4 for Telemetry Log: Not Supported 00:15:11.396 Error Log Page Entries Supported: 1 00:15:11.396 Keep Alive: Not Supported 00:15:11.396 00:15:11.396 NVM Command Set Attributes 00:15:11.396 ========================== 00:15:11.396 Submission Queue Entry Size 00:15:11.396 Max: 64 00:15:11.396 Min: 64 00:15:11.396 Completion Queue Entry Size 00:15:11.396 Max: 16 00:15:11.396 Min: 16 00:15:11.396 Number of Namespaces: 256 00:15:11.396 Compare Command: Supported 00:15:11.396 Write Uncorrectable Command: Not Supported 00:15:11.396 Dataset Management Command: Supported 00:15:11.396 Write Zeroes Command: Supported 00:15:11.396 Set Features Save Field: Supported 00:15:11.396 Reservations: Not Supported 00:15:11.396 Timestamp: Supported 00:15:11.396 Copy: Supported 00:15:11.396 Volatile Write Cache: Present 00:15:11.396 Atomic Write Unit (Normal): 1 00:15:11.396 Atomic Write Unit (PFail): 1 00:15:11.396 Atomic Compare & Write Unit: 1 00:15:11.396 Fused Compare & Write: Not Supported 00:15:11.396 Scatter-Gather List 00:15:11.396 SGL Command Set: Supported 00:15:11.396 SGL Keyed: Not Supported 00:15:11.396 SGL Bit Bucket Descriptor: Not Supported 00:15:11.396 SGL Metadata Pointer: Not Supported 00:15:11.396 Oversized SGL: Not Supported 00:15:11.396 SGL Metadata Address: Not Supported 00:15:11.396 SGL Offset: Not Supported 00:15:11.396 Transport SGL Data Block: Not Supported 00:15:11.396 Replay Protected Memory Block: Not Supported 00:15:11.396 00:15:11.396 Firmware Slot Information 00:15:11.396 ========================= 00:15:11.396 Active slot: 1 00:15:11.396 Slot 1 Firmware Revision: 1.0 00:15:11.396 00:15:11.396 00:15:11.396 Commands Supported and Effects 00:15:11.396 ============================== 00:15:11.396 Admin Commands 00:15:11.396 -------------- 00:15:11.396 Delete I/O Submission Queue (00h): Supported 00:15:11.396 Create I/O Submission Queue (01h): Supported 00:15:11.396 Get Log Page (02h): Supported 00:15:11.396 Delete I/O Completion Queue (04h): Supported 00:15:11.396 Create I/O Completion Queue (05h): Supported 00:15:11.396 Identify (06h): Supported 00:15:11.396 Abort (08h): Supported 00:15:11.396 Set Features (09h): Supported 00:15:11.396 Get Features (0Ah): Supported 00:15:11.396 Asynchronous Event Request (0Ch): Supported 00:15:11.396 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:11.396 Directive Send (19h): Supported 00:15:11.396 Directive Receive (1Ah): Supported 00:15:11.396 Virtualization Management (1Ch): Supported 00:15:11.396 Doorbell Buffer Config (7Ch): Supported 00:15:11.396 Format NVM (80h): Supported LBA-Change 00:15:11.396 I/O Commands 00:15:11.396 ------------ 00:15:11.396 Flush (00h): Supported LBA-Change 00:15:11.396 Write (01h): Supported LBA-Change 00:15:11.396 Read (02h): Supported 00:15:11.396 Compare (05h):
Supported 00:15:11.396 Write Zeroes (08h): Supported LBA-Change 00:15:11.396 Dataset Management (09h): Supported LBA-Change 00:15:11.396 Unknown (0Ch): Supported 00:15:11.396 Unknown (12h): Supported 00:15:11.396 Copy (19h): Supported LBA-Change 00:15:11.396 Unknown (1Dh): Supported LBA-Change 00:15:11.396 00:15:11.396 Error Log 00:15:11.396 ========= 00:15:11.396 00:15:11.396 Arbitration 00:15:11.396 =========== 00:15:11.396 Arbitration Burst: no limit 00:15:11.396 00:15:11.396 Power Management 00:15:11.396 ================ 00:15:11.396 Number of Power States: 1 00:15:11.396 Current Power State: Power State #0 00:15:11.396 Power State #0: 00:15:11.396 Max Power: 25.00 W 00:15:11.396 Non-Operational State: Operational 00:15:11.396 Entry Latency: 16 microseconds 00:15:11.396 Exit Latency: 4 microseconds 00:15:11.396 Relative Read Throughput: 0 00:15:11.396 Relative Read Latency: 0 00:15:11.396 Relative Write Throughput: 0 00:15:11.396 Relative Write Latency: 0 00:15:11.396 Idle Power: Not Reported 00:15:11.396 Active Power: Not Reported 00:15:11.396 Non-Operational Permissive Mode: Not Supported 00:15:11.396 00:15:11.396 Health Information 00:15:11.396 ================== 00:15:11.396 Critical Warnings: 00:15:11.396 Available Spare Space: OK 00:15:11.396 Temperature: OK 00:15:11.396 Device Reliability: OK 00:15:11.396 Read Only: No 00:15:11.396 Volatile Memory Backup: OK 00:15:11.396 Current Temperature: 323 Kelvin (50 Celsius) 00:15:11.396 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:11.396 Available Spare: 0% 00:15:11.396 Available Spare Threshold: 0% 00:15:11.396 Life Percentage Used: 0% 00:15:11.396 Data Units Read: 2066 00:15:11.396 Data Units Written: 1853 00:15:11.396 Host Read Commands: 99360 00:15:11.396 Host Write Commands: 97629 00:15:11.396 Controller Busy Time: 0 minutes 00:15:11.396 Power Cycles: 0 00:15:11.396 Power On Hours: 0 hours 00:15:11.396 Unsafe Shutdowns: 0 00:15:11.396 Unrecoverable Media Errors: 0 00:15:11.396 Lifetime Error Log Entries: 0 00:15:11.396 Warning Temperature Time: 0 minutes 00:15:11.396 Critical Temperature Time: 0 minutes 00:15:11.396 00:15:11.396 Number of Queues 00:15:11.396 ================ 00:15:11.396 Number of I/O Submission Queues: 64 00:15:11.396 Number of I/O Completion Queues: 64 00:15:11.396 00:15:11.396 ZNS Specific Controller Data 00:15:11.396 ============================ 00:15:11.396 Zone Append Size Limit: 0 00:15:11.396 00:15:11.396 00:15:11.396 Active Namespaces 00:15:11.396 ================= 00:15:11.396 Namespace ID:1 00:15:11.396 Error Recovery Timeout: Unlimited 00:15:11.396 Command Set Identifier: NVM (00h) 00:15:11.396 Deallocate: Supported 00:15:11.396 Deallocated/Unwritten Error: Supported 00:15:11.396 Deallocated Read Value: All 0x00 00:15:11.396 Deallocate in Write Zeroes: Not Supported 00:15:11.396 Deallocated Guard Field: 0xFFFF 00:15:11.396 Flush: Supported 00:15:11.396 Reservation: Not Supported 00:15:11.396 Namespace Sharing Capabilities: Private 00:15:11.397 Size (in LBAs): 1048576 (4GiB) 00:15:11.397 Capacity (in LBAs): 1048576 (4GiB) 00:15:11.397 Utilization (in LBAs): 1048576 (4GiB) 00:15:11.397 Thin Provisioning: Not Supported 00:15:11.397 Per-NS Atomic Units: No 00:15:11.397 Maximum Single Source Range Length: 128 00:15:11.397 Maximum Copy Length: 128 00:15:11.397 Maximum Source Range Count: 128 00:15:11.397 NGUID/EUI64 Never Reused: No 00:15:11.397 Namespace Write Protected: No 00:15:11.397 Number of LBA Formats: 8 00:15:11.397 Current LBA Format: LBA Format #04 00:15:11.397 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:15:11.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:11.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:11.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:11.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:11.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:11.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:11.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:11.397 00:15:11.397 NVM Specific Namespace Data 00:15:11.397 =========================== 00:15:11.397 Logical Block Storage Tag Mask: 0 00:15:11.397 Protection Information Capabilities: 00:15:11.397 16b Guard Protection Information Storage Tag Support: No 00:15:11.397 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:11.397 Storage Tag Check Read Support: No 00:15:11.397 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Namespace ID:2 00:15:11.397 Error Recovery Timeout: Unlimited 00:15:11.397 Command Set Identifier: NVM (00h) 00:15:11.397 Deallocate: Supported 00:15:11.397 Deallocated/Unwritten Error: Supported 00:15:11.397 Deallocated Read Value: All 0x00 00:15:11.397 Deallocate in Write Zeroes: Not Supported 00:15:11.397 Deallocated Guard Field: 0xFFFF 00:15:11.397 Flush: Supported 00:15:11.397 Reservation: Not Supported 00:15:11.397 Namespace Sharing Capabilities: Private 00:15:11.397 Size (in LBAs): 1048576 (4GiB) 00:15:11.397 Capacity (in LBAs): 1048576 (4GiB) 00:15:11.397 Utilization (in LBAs): 1048576 (4GiB) 00:15:11.397 Thin Provisioning: Not Supported 00:15:11.397 Per-NS Atomic Units: No 00:15:11.397 Maximum Single Source Range Length: 128 00:15:11.397 Maximum Copy Length: 128 00:15:11.397 Maximum Source Range Count: 128 00:15:11.397 NGUID/EUI64 Never Reused: No 00:15:11.397 Namespace Write Protected: No 00:15:11.397 Number of LBA Formats: 8 00:15:11.397 Current LBA Format: LBA Format #04 00:15:11.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:11.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:11.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:11.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:11.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:11.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:11.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:11.397 00:15:11.397 NVM Specific Namespace Data 00:15:11.397 =========================== 00:15:11.397 Logical Block Storage Tag Mask: 0 00:15:11.397 Protection Information Capabilities: 00:15:11.397 16b Guard Protection Information Storage Tag Support: No 00:15:11.397 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:15:11.397 Storage Tag Check Read Support: No 00:15:11.397 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Namespace ID:3 00:15:11.397 Error Recovery Timeout: Unlimited 00:15:11.397 Command Set Identifier: NVM (00h) 00:15:11.397 Deallocate: Supported 00:15:11.397 Deallocated/Unwritten Error: Supported 00:15:11.397 Deallocated Read Value: All 0x00 00:15:11.397 Deallocate in Write Zeroes: Not Supported 00:15:11.397 Deallocated Guard Field: 0xFFFF 00:15:11.397 Flush: Supported 00:15:11.397 Reservation: Not Supported 00:15:11.397 Namespace Sharing Capabilities: Private 00:15:11.397 Size (in LBAs): 1048576 (4GiB) 00:15:11.397 Capacity (in LBAs): 1048576 (4GiB) 00:15:11.397 Utilization (in LBAs): 1048576 (4GiB) 00:15:11.397 Thin Provisioning: Not Supported 00:15:11.397 Per-NS Atomic Units: No 00:15:11.397 Maximum Single Source Range Length: 128 00:15:11.397 Maximum Copy Length: 128 00:15:11.397 Maximum Source Range Count: 128 00:15:11.397 NGUID/EUI64 Never Reused: No 00:15:11.397 Namespace Write Protected: No 00:15:11.397 Number of LBA Formats: 8 00:15:11.397 Current LBA Format: LBA Format #04 00:15:11.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.397 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:11.397 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:11.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:11.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:11.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:11.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:11.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:11.397 00:15:11.397 NVM Specific Namespace Data 00:15:11.397 =========================== 00:15:11.397 Logical Block Storage Tag Mask: 0 00:15:11.397 Protection Information Capabilities: 00:15:11.397 16b Guard Protection Information Storage Tag Support: No 00:15:11.397 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:11.397 Storage Tag Check Read Support: No 00:15:11.397 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.397 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:15:11.397 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:15:11.656 ===================================================== 00:15:11.656 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:11.656 ===================================================== 00:15:11.656 Controller Capabilities/Features 00:15:11.656 ================================ 00:15:11.656 Vendor ID: 1b36 00:15:11.656 Subsystem Vendor ID: 1af4 00:15:11.656 Serial Number: 12340 00:15:11.656 Model Number: QEMU NVMe Ctrl 00:15:11.656 Firmware Version: 8.0.0 00:15:11.657 Recommended Arb Burst: 6 00:15:11.657 IEEE OUI Identifier: 00 54 52 00:15:11.657 Multi-path I/O 00:15:11.657 May have multiple subsystem ports: No 00:15:11.657 May have multiple controllers: No 00:15:11.657 Associated with SR-IOV VF: No 00:15:11.657 Max Data Transfer Size: 524288 00:15:11.657 Max Number of Namespaces: 256 00:15:11.657 Max Number of I/O Queues: 64 00:15:11.657 NVMe Specification Version (VS): 1.4 00:15:11.657 NVMe Specification Version (Identify): 1.4 00:15:11.657 Maximum Queue Entries: 2048 00:15:11.657 Contiguous Queues Required: Yes 00:15:11.657 Arbitration Mechanisms Supported 00:15:11.657 Weighted Round Robin: Not Supported 00:15:11.657 Vendor Specific: Not Supported 00:15:11.657 Reset Timeout: 7500 ms 00:15:11.657 Doorbell Stride: 4 bytes 00:15:11.657 NVM Subsystem Reset: Not Supported 00:15:11.657 Command Sets Supported 00:15:11.657 NVM Command Set: Supported 00:15:11.657 Boot Partition: Not Supported 00:15:11.657 Memory Page Size Minimum: 4096 bytes 00:15:11.657 Memory Page Size Maximum: 65536 bytes 00:15:11.657 Persistent Memory Region: Not Supported 00:15:11.657 Optional Asynchronous Events Supported 00:15:11.657 Namespace Attribute Notices: Supported 00:15:11.657 Firmware Activation Notices: Not Supported 00:15:11.657 ANA Change Notices: Not Supported 00:15:11.657 PLE Aggregate Log Change Notices: Not Supported 00:15:11.657 LBA Status Info Alert Notices: Not Supported 00:15:11.657 EGE Aggregate Log Change Notices: Not Supported 00:15:11.657 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.657 Zone Descriptor Change Notices: Not Supported 00:15:11.657 Discovery Log Change Notices: Not Supported 00:15:11.657 Controller Attributes 00:15:11.657 128-bit Host Identifier: Not Supported 00:15:11.657 Non-Operational Permissive Mode: Not Supported 00:15:11.657 NVM Sets: Not Supported 00:15:11.657 Read Recovery Levels: Not Supported 00:15:11.657 Endurance Groups: Not Supported 00:15:11.657 Predictable Latency Mode: Not Supported 00:15:11.657 Traffic Based Keep Alive: Not Supported 00:15:11.657 Namespace Granularity: Not Supported 00:15:11.657 SQ Associations: Not Supported 00:15:11.657 UUID List: Not Supported 00:15:11.657 Multi-Domain Subsystem: Not Supported 00:15:11.657 Fixed Capacity Management: Not Supported 00:15:11.657 Variable Capacity Management: Not Supported 00:15:11.657 Delete Endurance Group: Not Supported 00:15:11.657 Delete NVM Set: Not Supported 00:15:11.657 Extended LBA Formats Supported: Supported 00:15:11.657 Flexible Data Placement Supported: Not Supported 00:15:11.657 00:15:11.657 Controller Memory Buffer Support 00:15:11.657 ================================ 00:15:11.657 Supported: No 00:15:11.657 00:15:11.657 Persistent Memory Region Support 00:15:11.657
================================ 00:15:11.657 Supported: No 00:15:11.657 00:15:11.657 Admin Command Set Attributes 00:15:11.657 ============================ 00:15:11.657 Security Send/Receive: Not Supported 00:15:11.657 Format NVM: Supported 00:15:11.657 Firmware Activate/Download: Not Supported 00:15:11.657 Namespace Management: Supported 00:15:11.657 Device Self-Test: Not Supported 00:15:11.657 Directives: Supported 00:15:11.657 NVMe-MI: Not Supported 00:15:11.657 Virtualization Management: Not Supported 00:15:11.657 Doorbell Buffer Config: Supported 00:15:11.657 Get LBA Status Capability: Not Supported 00:15:11.657 Command & Feature Lockdown Capability: Not Supported 00:15:11.657 Abort Command Limit: 4 00:15:11.657 Async Event Request Limit: 4 00:15:11.657 Number of Firmware Slots: N/A 00:15:11.657 Firmware Slot 1 Read-Only: N/A 00:15:11.657 Firmware Activation Without Reset: N/A 00:15:11.657 Multiple Update Detection Support: N/A 00:15:11.657 Firmware Update Granularity: No Information Provided 00:15:11.657 Per-Namespace SMART Log: Yes 00:15:11.657 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.657 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:15:11.657 Command Effects Log Page: Supported 00:15:11.657 Get Log Page Extended Data: Supported 00:15:11.657 Telemetry Log Pages: Not Supported 00:15:11.657 Persistent Event Log Pages: Not Supported 00:15:11.657 Supported Log Pages Log Page: May Support 00:15:11.657 Commands Supported & Effects Log Page: Not Supported 00:15:11.657 Feature Identifiers & Effects Log Page: May Support 00:15:11.657 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.657 Data Area 4 for Telemetry Log: Not Supported 00:15:11.657 Error Log Page Entries Supported: 1 00:15:11.657 Keep Alive: Not Supported 00:15:11.657 00:15:11.657 NVM Command Set Attributes 00:15:11.657 ========================== 00:15:11.657 Submission Queue Entry Size 00:15:11.657 Max: 64 00:15:11.657 Min: 64 00:15:11.657 Completion Queue Entry Size 00:15:11.657 Max: 16 00:15:11.657 Min: 16 00:15:11.657 Number of Namespaces: 256 00:15:11.657 Compare Command: Supported 00:15:11.657 Write Uncorrectable Command: Not Supported 00:15:11.657 Dataset Management Command: Supported 00:15:11.657 Write Zeroes Command: Supported 00:15:11.657 Set Features Save Field: Supported 00:15:11.657 Reservations: Not Supported 00:15:11.657 Timestamp: Supported 00:15:11.657 Copy: Supported 00:15:11.657 Volatile Write Cache: Present 00:15:11.657 Atomic Write Unit (Normal): 1 00:15:11.657 Atomic Write Unit (PFail): 1 00:15:11.657 Atomic Compare & Write Unit: 1 00:15:11.657 Fused Compare & Write: Not Supported 00:15:11.657 Scatter-Gather List 00:15:11.657 SGL Command Set: Supported 00:15:11.657 SGL Keyed: Not Supported 00:15:11.657 SGL Bit Bucket Descriptor: Not Supported 00:15:11.657 SGL Metadata Pointer: Not Supported 00:15:11.657 Oversized SGL: Not Supported 00:15:11.657 SGL Metadata Address: Not Supported 00:15:11.657 SGL Offset: Not Supported 00:15:11.657 Transport SGL Data Block: Not Supported 00:15:11.657 Replay Protected Memory Block: Not Supported 00:15:11.657 00:15:11.657 Firmware Slot Information 00:15:11.657 ========================= 00:15:11.657 Active slot: 1 00:15:11.657 Slot 1 Firmware Revision: 1.0 00:15:11.657 00:15:11.657 00:15:11.657 Commands Supported and Effects 00:15:11.657 ============================== 00:15:11.657 Admin Commands 00:15:11.657 -------------- 00:15:11.657 Delete I/O Submission Queue (00h): Supported 00:15:11.657 Create I/O Submission Queue (01h): Supported 00:15:11.657
Get Log Page (02h): Supported 00:15:11.657 Delete I/O Completion Queue (04h): Supported 00:15:11.657 Create I/O Completion Queue (05h): Supported 00:15:11.657 Identify (06h): Supported 00:15:11.657 Abort (08h): Supported 00:15:11.657 Set Features (09h): Supported 00:15:11.657 Get Features (0Ah): Supported 00:15:11.657 Asynchronous Event Request (0Ch): Supported 00:15:11.657 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:11.657 Directive Send (19h): Supported 00:15:11.657 Directive Receive (1Ah): Supported 00:15:11.657 Virtualization Management (1Ch): Supported 00:15:11.657 Doorbell Buffer Config (7Ch): Supported 00:15:11.657 Format NVM (80h): Supported LBA-Change 00:15:11.657 I/O Commands 00:15:11.657 ------------ 00:15:11.657 Flush (00h): Supported LBA-Change 00:15:11.657 Write (01h): Supported LBA-Change 00:15:11.657 Read (02h): Supported 00:15:11.657 Compare (05h): Supported 00:15:11.657 Write Zeroes (08h): Supported LBA-Change 00:15:11.657 Dataset Management (09h): Supported LBA-Change 00:15:11.657 Unknown (0Ch): Supported 00:15:11.657 Unknown (12h): Supported 00:15:11.657 Copy (19h): Supported LBA-Change 00:15:11.657 Unknown (1Dh): Supported LBA-Change 00:15:11.657 00:15:11.657 Error Log 00:15:11.657 ========= 00:15:11.657 00:15:11.657 Arbitration 00:15:11.657 =========== 00:15:11.657 Arbitration Burst: no limit 00:15:11.657 00:15:11.657 Power Management 00:15:11.657 ================ 00:15:11.657 Number of Power States: 1 00:15:11.657 Current Power State: Power State #0 00:15:11.657 Power State #0: 00:15:11.657 Max Power: 25.00 W 00:15:11.657 Non-Operational State: Operational 00:15:11.657 Entry Latency: 16 microseconds 00:15:11.657 Exit Latency: 4 microseconds 00:15:11.657 Relative Read Throughput: 0 00:15:11.657 Relative Read Latency: 0 00:15:11.657 Relative Write Throughput: 0 00:15:11.657 Relative Write Latency: 0 00:15:11.657 Idle Power: Not Reported 00:15:11.657 Active Power: Not Reported 00:15:11.657 Non-Operational Permissive Mode: Not Supported 00:15:11.657 00:15:11.657 Health Information 00:15:11.657 ================== 00:15:11.657 Critical Warnings: 00:15:11.657 Available Spare Space: OK 00:15:11.657 Temperature: OK 00:15:11.658 Device Reliability: OK 00:15:11.658 Read Only: No 00:15:11.658 Volatile Memory Backup: OK 00:15:11.658 Current Temperature: 323 Kelvin (50 Celsius) 00:15:11.658 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:11.658 Available Spare: 0% 00:15:11.658 Available Spare Threshold: 0% 00:15:11.658 Life Percentage Used: 0% 00:15:11.658 Data Units Read: 656 00:15:11.658 Data Units Written: 584 00:15:11.658 Host Read Commands: 32744 00:15:11.658 Host Write Commands: 32530 00:15:11.658 Controller Busy Time: 0 minutes 00:15:11.658 Power Cycles: 0 00:15:11.658 Power On Hours: 0 hours 00:15:11.658 Unsafe Shutdowns: 0 00:15:11.658 Unrecoverable Media Errors: 0 00:15:11.658 Lifetime Error Log Entries: 0 00:15:11.658 Warning Temperature Time: 0 minutes 00:15:11.658 Critical Temperature Time: 0 minutes 00:15:11.658 00:15:11.658 Number of Queues 00:15:11.658 ================ 00:15:11.658 Number of I/O Submission Queues: 64 00:15:11.658 Number of I/O Completion Queues: 64 00:15:11.658 00:15:11.658 ZNS Specific Controller Data 00:15:11.658 ============================ 00:15:11.658 Zone Append Size Limit: 0 00:15:11.658 00:15:11.658 00:15:11.658 Active Namespaces 00:15:11.658 ================= 00:15:11.658 Namespace ID:1 00:15:11.658 Error Recovery Timeout: Unlimited 00:15:11.658 Command Set Identifier: NVM (00h) 00:15:11.658 Deallocate: Supported 
00:15:11.658 Deallocated/Unwritten Error: Supported 00:15:11.658 Deallocated Read Value: All 0x00 00:15:11.658 Deallocate in Write Zeroes: Not Supported 00:15:11.658 Deallocated Guard Field: 0xFFFF 00:15:11.658 Flush: Supported 00:15:11.658 Reservation: Not Supported 00:15:11.658 Metadata Transferred as: Separate Metadata Buffer 00:15:11.658 Namespace Sharing Capabilities: Private 00:15:11.658 Size (in LBAs): 1548666 (5GiB) 00:15:11.658 Capacity (in LBAs): 1548666 (5GiB) 00:15:11.658 Utilization (in LBAs): 1548666 (5GiB) 00:15:11.658 Thin Provisioning: Not Supported 00:15:11.658 Per-NS Atomic Units: No 00:15:11.658 Maximum Single Source Range Length: 128 00:15:11.658 Maximum Copy Length: 128 00:15:11.658 Maximum Source Range Count: 128 00:15:11.658 NGUID/EUI64 Never Reused: No 00:15:11.658 Namespace Write Protected: No 00:15:11.658 Number of LBA Formats: 8 00:15:11.658 Current LBA Format: LBA Format #07 00:15:11.658 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.658 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:11.658 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:11.658 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:11.658 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:11.658 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:11.658 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:11.658 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:11.658 00:15:11.658 NVM Specific Namespace Data 00:15:11.658 =========================== 00:15:11.658 Logical Block Storage Tag Mask: 0 00:15:11.658 Protection Information Capabilities: 00:15:11.658 16b Guard Protection Information Storage Tag Support: No 00:15:11.658 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:11.658 Storage Tag Check Read Support: No 00:15:11.658 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.658 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.658 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.658 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.658 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.658 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.658 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.658 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.658 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:15:11.658 13:09:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:15:11.917 ===================================================== 00:15:11.917 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:11.917 ===================================================== 00:15:11.917 Controller Capabilities/Features 00:15:11.917 ================================ 00:15:11.917 Vendor ID: 1b36 00:15:11.917 Subsystem Vendor ID: 1af4 00:15:11.917 Serial Number: 12341 00:15:11.917 Model Number: QEMU NVMe Ctrl 00:15:11.917 Firmware Version: 8.0.0 00:15:11.917 Recommended Arb Burst: 6 00:15:11.917 IEEE OUI Identifier: 00 54 52 00:15:11.917 Multi-path I/O 00:15:11.917 May have multiple subsystem ports: No 00:15:11.917 May have multiple 
controllers: No 00:15:11.917 Associated with SR-IOV VF: No 00:15:11.917 Max Data Transfer Size: 524288 00:15:11.917 Max Number of Namespaces: 256 00:15:11.917 Max Number of I/O Queues: 64 00:15:11.917 NVMe Specification Version (VS): 1.4 00:15:11.917 NVMe Specification Version (Identify): 1.4 00:15:11.917 Maximum Queue Entries: 2048 00:15:11.917 Contiguous Queues Required: Yes 00:15:11.917 Arbitration Mechanisms Supported 00:15:11.917 Weighted Round Robin: Not Supported 00:15:11.917 Vendor Specific: Not Supported 00:15:11.917 Reset Timeout: 7500 ms 00:15:11.917 Doorbell Stride: 4 bytes 00:15:11.917 NVM Subsystem Reset: Not Supported 00:15:11.917 Command Sets Supported 00:15:11.917 NVM Command Set: Supported 00:15:11.917 Boot Partition: Not Supported 00:15:11.917 Memory Page Size Minimum: 4096 bytes 00:15:11.917 Memory Page Size Maximum: 65536 bytes 00:15:11.917 Persistent Memory Region: Not Supported 00:15:11.917 Optional Asynchronous Events Supported 00:15:11.917 Namespace Attribute Notices: Supported 00:15:11.917 Firmware Activation Notices: Not Supported 00:15:11.917 ANA Change Notices: Not Supported 00:15:11.917 PLE Aggregate Log Change Notices: Not Supported 00:15:11.917 LBA Status Info Alert Notices: Not Supported 00:15:11.917 EGE Aggregate Log Change Notices: Not Supported 00:15:11.917 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.917 Zone Descriptor Change Notices: Not Supported 00:15:11.917 Discovery Log Change Notices: Not Supported 00:15:11.917 Controller Attributes 00:15:11.917 128-bit Host Identifier: Not Supported 00:15:11.917 Non-Operational Permissive Mode: Not Supported 00:15:11.917 NVM Sets: Not Supported 00:15:11.917 Read Recovery Levels: Not Supported 00:15:11.917 Endurance Groups: Not Supported 00:15:11.917 Predictable Latency Mode: Not Supported 00:15:11.917 Traffic Based Keep Alive: Not Supported 00:15:11.917 Namespace Granularity: Not Supported 00:15:11.917 SQ Associations: Not Supported 00:15:11.917 UUID List: Not Supported 00:15:11.917 Multi-Domain Subsystem: Not Supported 00:15:11.917 Fixed Capacity Management: Not Supported 00:15:11.917 Variable Capacity Management: Not Supported 00:15:11.917 Delete Endurance Group: Not Supported 00:15:11.917 Delete NVM Set: Not Supported 00:15:11.917 Extended LBA Formats Supported: Supported 00:15:11.917 Flexible Data Placement Supported: Not Supported 00:15:11.917 00:15:11.917 Controller Memory Buffer Support 00:15:11.917 ================================ 00:15:11.917 Supported: No 00:15:11.917 00:15:11.917 Persistent Memory Region Support 00:15:11.917 ================================ 00:15:11.917 Supported: No 00:15:11.917 00:15:11.917 Admin Command Set Attributes 00:15:11.917 ============================ 00:15:11.917 Security Send/Receive: Not Supported 00:15:11.917 Format NVM: Supported 00:15:11.917 Firmware Activate/Download: Not Supported 00:15:11.917 Namespace Management: Supported 00:15:11.917 Device Self-Test: Not Supported 00:15:11.917 Directives: Supported 00:15:11.917 NVMe-MI: Not Supported 00:15:11.917 Virtualization Management: Not Supported 00:15:11.917 Doorbell Buffer Config: Supported 00:15:11.917 Get LBA Status Capability: Not Supported 00:15:11.917 Command & Feature Lockdown Capability: Not Supported 00:15:11.917 Abort Command Limit: 4 00:15:11.917 Async Event Request Limit: 4 00:15:11.917 Number of Firmware Slots: N/A 00:15:11.917 Firmware Slot 1 Read-Only: N/A 00:15:11.917 Firmware Activation Without Reset: N/A 00:15:11.917 Multiple Update Detection Support: N/A 00:15:11.917 Firmware Update
Granularity: No Information Provided 00:15:11.917 Per-Namespace SMART Log: Yes 00:15:11.918 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.918 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:15:11.918 Command Effects Log Page: Supported 00:15:11.918 Get Log Page Extended Data: Supported 00:15:11.918 Telemetry Log Pages: Not Supported 00:15:11.918 Persistent Event Log Pages: Not Supported 00:15:11.918 Supported Log Pages Log Page: May Support 00:15:11.918 Commands Supported & Effects Log Page: Not Supported 00:15:11.918 Feature Identifiers & Effects Log Page: May Support 00:15:11.918 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.918 Data Area 4 for Telemetry Log: Not Supported 00:15:11.918 Error Log Page Entries Supported: 1 00:15:11.918 Keep Alive: Not Supported 00:15:11.918 00:15:11.918 NVM Command Set Attributes 00:15:11.918 ========================== 00:15:11.918 Submission Queue Entry Size 00:15:11.918 Max: 64 00:15:11.918 Min: 64 00:15:11.918 Completion Queue Entry Size 00:15:11.918 Max: 16 00:15:11.918 Min: 16 00:15:11.918 Number of Namespaces: 256 00:15:11.918 Compare Command: Supported 00:15:11.918 Write Uncorrectable Command: Not Supported 00:15:11.918 Dataset Management Command: Supported 00:15:11.918 Write Zeroes Command: Supported 00:15:11.918 Set Features Save Field: Supported 00:15:11.918 Reservations: Not Supported 00:15:11.918 Timestamp: Supported 00:15:11.918 Copy: Supported 00:15:11.918 Volatile Write Cache: Present 00:15:11.918 Atomic Write Unit (Normal): 1 00:15:11.918 Atomic Write Unit (PFail): 1 00:15:11.918 Atomic Compare & Write Unit: 1 00:15:11.918 Fused Compare & Write: Not Supported 00:15:11.918 Scatter-Gather List 00:15:11.918 SGL Command Set: Supported 00:15:11.918 SGL Keyed: Not Supported 00:15:11.918 SGL Bit Bucket Descriptor: Not Supported 00:15:11.918 SGL Metadata Pointer: Not Supported 00:15:11.918 Oversized SGL: Not Supported 00:15:11.918 SGL Metadata Address: Not Supported 00:15:11.918 SGL Offset: Not Supported 00:15:11.918 Transport SGL Data Block: Not Supported 00:15:11.918 Replay Protected Memory Block: Not Supported 00:15:11.918 00:15:11.918 Firmware Slot Information 00:15:11.918 ========================= 00:15:11.918 Active slot: 1 00:15:11.918 Slot 1 Firmware Revision: 1.0 00:15:11.918 00:15:11.918 00:15:11.918 Commands Supported and Effects 00:15:11.918 ============================== 00:15:11.918 Admin Commands 00:15:11.918 -------------- 00:15:11.918 Delete I/O Submission Queue (00h): Supported 00:15:11.918 Create I/O Submission Queue (01h): Supported 00:15:11.918 Get Log Page (02h): Supported 00:15:11.918 Delete I/O Completion Queue (04h): Supported 00:15:11.918 Create I/O Completion Queue (05h): Supported 00:15:11.918 Identify (06h): Supported 00:15:11.918 Abort (08h): Supported 00:15:11.918 Set Features (09h): Supported 00:15:11.918 Get Features (0Ah): Supported 00:15:11.918 Asynchronous Event Request (0Ch): Supported 00:15:11.918 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:11.918 Directive Send (19h): Supported 00:15:11.918 Directive Receive (1Ah): Supported 00:15:11.918 Virtualization Management (1Ch): Supported 00:15:11.918 Doorbell Buffer Config (7Ch): Supported 00:15:11.918 Format NVM (80h): Supported LBA-Change 00:15:11.918 I/O Commands 00:15:11.918 ------------ 00:15:11.918 Flush (00h): Supported LBA-Change 00:15:11.918 Write (01h): Supported LBA-Change 00:15:11.918 Read (02h): Supported 00:15:11.918 Compare (05h): Supported 00:15:11.918 Write Zeroes (08h): Supported LBA-Change 00:15:11.918
Dataset Management (09h): Supported LBA-Change 00:15:11.918 Unknown (0Ch): Supported 00:15:11.918 Unknown (12h): Supported 00:15:11.918 Copy (19h): Supported LBA-Change 00:15:11.918 Unknown (1Dh): Supported LBA-Change 00:15:11.918 00:15:11.918 Error Log 00:15:11.918 ========= 00:15:11.918 00:15:11.918 Arbitration 00:15:11.918 =========== 00:15:11.918 Arbitration Burst: no limit 00:15:11.918 00:15:11.918 Power Management 00:15:11.918 ================ 00:15:11.918 Number of Power States: 1 00:15:11.918 Current Power State: Power State #0 00:15:11.918 Power State #0: 00:15:11.918 Max Power: 25.00 W 00:15:11.918 Non-Operational State: Operational 00:15:11.918 Entry Latency: 16 microseconds 00:15:11.918 Exit Latency: 4 microseconds 00:15:11.918 Relative Read Throughput: 0 00:15:11.918 Relative Read Latency: 0 00:15:11.918 Relative Write Throughput: 0 00:15:11.918 Relative Write Latency: 0 00:15:11.918 Idle Power: Not Reported 00:15:11.918 Active Power: Not Reported 00:15:11.918 Non-Operational Permissive Mode: Not Supported 00:15:11.918 00:15:11.918 Health Information 00:15:11.918 ================== 00:15:11.918 Critical Warnings: 00:15:11.918 Available Spare Space: OK 00:15:11.918 Temperature: OK 00:15:11.918 Device Reliability: OK 00:15:11.918 Read Only: No 00:15:11.918 Volatile Memory Backup: OK 00:15:11.918 Current Temperature: 323 Kelvin (50 Celsius) 00:15:11.918 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:11.918 Available Spare: 0% 00:15:11.918 Available Spare Threshold: 0% 00:15:11.918 Life Percentage Used: 0% 00:15:11.918 Data Units Read: 963 00:15:11.918 Data Units Written: 830 00:15:11.918 Host Read Commands: 47786 00:15:11.918 Host Write Commands: 46572 00:15:11.918 Controller Busy Time: 0 minutes 00:15:11.918 Power Cycles: 0 00:15:11.918 Power On Hours: 0 hours 00:15:11.918 Unsafe Shutdowns: 0 00:15:11.918 Unrecoverable Media Errors: 0 00:15:11.918 Lifetime Error Log Entries: 0 00:15:11.918 Warning Temperature Time: 0 minutes 00:15:11.918 Critical Temperature Time: 0 minutes 00:15:11.918 00:15:11.918 Number of Queues 00:15:11.918 ================ 00:15:11.918 Number of I/O Submission Queues: 64 00:15:11.918 Number of I/O Completion Queues: 64 00:15:11.918 00:15:11.918 ZNS Specific Controller Data 00:15:11.918 ============================ 00:15:11.918 Zone Append Size Limit: 0 00:15:11.918 00:15:11.918 00:15:11.918 Active Namespaces 00:15:11.918 ================= 00:15:11.918 Namespace ID:1 00:15:11.918 Error Recovery Timeout: Unlimited 00:15:11.918 Command Set Identifier: NVM (00h) 00:15:11.918 Deallocate: Supported 00:15:11.918 Deallocated/Unwritten Error: Supported 00:15:11.918 Deallocated Read Value: All 0x00 00:15:11.918 Deallocate in Write Zeroes: Not Supported 00:15:11.918 Deallocated Guard Field: 0xFFFF 00:15:11.918 Flush: Supported 00:15:11.918 Reservation: Not Supported 00:15:11.918 Namespace Sharing Capabilities: Private 00:15:11.918 Size (in LBAs): 1310720 (5GiB) 00:15:11.918 Capacity (in LBAs): 1310720 (5GiB) 00:15:11.918 Utilization (in LBAs): 1310720 (5GiB) 00:15:11.918 Thin Provisioning: Not Supported 00:15:11.918 Per-NS Atomic Units: No 00:15:11.918 Maximum Single Source Range Length: 128 00:15:11.918 Maximum Copy Length: 128 00:15:11.918 Maximum Source Range Count: 128 00:15:11.918 NGUID/EUI64 Never Reused: No 00:15:11.918 Namespace Write Protected: No 00:15:11.918 Number of LBA Formats: 8 00:15:11.918 Current LBA Format: LBA Format #04 00:15:11.918 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.918 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:15:11.918 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:11.918 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:11.918 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:11.918 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:11.918 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:11.918 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:11.918 00:15:11.918 NVM Specific Namespace Data 00:15:11.918 =========================== 00:15:11.918 Logical Block Storage Tag Mask: 0 00:15:11.918 Protection Information Capabilities: 00:15:11.918 16b Guard Protection Information Storage Tag Support: No 00:15:11.918 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:11.918 Storage Tag Check Read Support: No 00:15:11.918 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.918 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.918 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.918 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.918 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.918 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.918 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.918 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:11.918 13:09:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:15:11.918 13:09:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:15:12.177 ===================================================== 00:15:12.177 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:12.177 ===================================================== 00:15:12.177 Controller Capabilities/Features 00:15:12.177 ================================ 00:15:12.177 Vendor ID: 1b36 00:15:12.177 Subsystem Vendor ID: 1af4 00:15:12.177 Serial Number: 12342 00:15:12.177 Model Number: QEMU NVMe Ctrl 00:15:12.177 Firmware Version: 8.0.0 00:15:12.177 Recommended Arb Burst: 6 00:15:12.177 IEEE OUI Identifier: 00 54 52 00:15:12.177 Multi-path I/O 00:15:12.177 May have multiple subsystem ports: No 00:15:12.177 May have multiple controllers: No 00:15:12.177 Associated with SR-IOV VF: No 00:15:12.177 Max Data Transfer Size: 524288 00:15:12.177 Max Number of Namespaces: 256 00:15:12.177 Max Number of I/O Queues: 64 00:15:12.177 NVMe Specification Version (VS): 1.4 00:15:12.177 NVMe Specification Version (Identify): 1.4 00:15:12.177 Maximum Queue Entries: 2048 00:15:12.177 Contiguous Queues Required: Yes 00:15:12.177 Arbitration Mechanisms Supported 00:15:12.177 Weighted Round Robin: Not Supported 00:15:12.177 Vendor Specific: Not Supported 00:15:12.177 Reset Timeout: 7500 ms 00:15:12.177 Doorbell Stride: 4 bytes 00:15:12.177 NVM Subsystem Reset: Not Supported 00:15:12.177 Command Sets Supported 00:15:12.177 NVM Command Set: Supported 00:15:12.177 Boot Partition: Not Supported 00:15:12.177 Memory Page Size Minimum: 4096 bytes 00:15:12.177 Memory Page Size Maximum: 65536 bytes 00:15:12.177 Persistent Memory Region: Not Supported 00:15:12.177 Optional Asynchronous Events Supported 00:15:12.177 Namespace Attribute Notices: Supported 00:15:12.177 Firmware 
Activation Notices: Not Supported 00:15:12.177 ANA Change Notices: Not Supported 00:15:12.177 PLE Aggregate Log Change Notices: Not Supported 00:15:12.177 LBA Status Info Alert Notices: Not Supported 00:15:12.177 EGE Aggregate Log Change Notices: Not Supported 00:15:12.177 Normal NVM Subsystem Shutdown event: Not Supported 00:15:12.177 Zone Descriptor Change Notices: Not Supported 00:15:12.177 Discovery Log Change Notices: Not Supported 00:15:12.177 Controller Attributes 00:15:12.177 128-bit Host Identifier: Not Supported 00:15:12.177 Non-Operational Permissive Mode: Not Supported 00:15:12.177 NVM Sets: Not Supported 00:15:12.177 Read Recovery Levels: Not Supported 00:15:12.177 Endurance Groups: Not Supported 00:15:12.177 Predictable Latency Mode: Not Supported 00:15:12.177 Traffic Based Keep Alive: Not Supported 00:15:12.177 Namespace Granularity: Not Supported 00:15:12.177 SQ Associations: Not Supported 00:15:12.177 UUID List: Not Supported 00:15:12.177 Multi-Domain Subsystem: Not Supported 00:15:12.177 Fixed Capacity Management: Not Supported 00:15:12.177 Variable Capacity Management: Not Supported 00:15:12.177 Delete Endurance Group: Not Supported 00:15:12.177 Delete NVM Set: Not Supported 00:15:12.177 Extended LBA Formats Supported: Supported 00:15:12.177 Flexible Data Placement Supported: Not Supported 00:15:12.177 00:15:12.177 Controller Memory Buffer Support 00:15:12.177 ================================ 00:15:12.177 Supported: No 00:15:12.177 00:15:12.177 Persistent Memory Region Support 00:15:12.177 ================================ 00:15:12.177 Supported: No 00:15:12.177 00:15:12.177 Admin Command Set Attributes 00:15:12.177 ============================ 00:15:12.177 Security Send/Receive: Not Supported 00:15:12.177 Format NVM: Supported 00:15:12.177 Firmware Activate/Download: Not Supported 00:15:12.177 Namespace Management: Supported 00:15:12.177 Device Self-Test: Not Supported 00:15:12.177 Directives: Supported 00:15:12.177 NVMe-MI: Not Supported 00:15:12.177 Virtualization Management: Not Supported 00:15:12.177 Doorbell Buffer Config: Supported 00:15:12.177 Get LBA Status Capability: Not Supported 00:15:12.177 Command & Feature Lockdown Capability: Not Supported 00:15:12.177 Abort Command Limit: 4 00:15:12.177 Async Event Request Limit: 4 00:15:12.177 Number of Firmware Slots: N/A 00:15:12.177 Firmware Slot 1 Read-Only: N/A 00:15:12.177 Firmware Activation Without Reset: N/A 00:15:12.177 Multiple Update Detection Support: N/A 00:15:12.177 Firmware Update Granularity: No Information Provided 00:15:12.177 Per-Namespace SMART Log: Yes 00:15:12.177 Asymmetric Namespace Access Log Page: Not Supported 00:15:12.177 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:15:12.177 Command Effects Log Page: Supported 00:15:12.177 Get Log Page Extended Data: Supported 00:15:12.177 Telemetry Log Pages: Not Supported 00:15:12.177 Persistent Event Log Pages: Not Supported 00:15:12.177 Supported Log Pages Log Page: May Support 00:15:12.177 Commands Supported & Effects Log Page: Not Supported 00:15:12.177 Feature Identifiers & Effects Log Page: May Support 00:15:12.177 NVMe-MI Commands & Effects Log Page: May Support 00:15:12.177 Data Area 4 for Telemetry Log: Not Supported 00:15:12.177 Error Log Page Entries Supported: 1 00:15:12.177 Keep Alive: Not Supported 00:15:12.177 00:15:12.177 NVM Command Set Attributes 00:15:12.177 ========================== 00:15:12.177 Submission Queue Entry Size 00:15:12.177 Max: 64 00:15:12.177 Min: 64 00:15:12.177 Completion Queue Entry Size 00:15:12.177 Max: 16
00:15:12.177 Min: 16 00:15:12.177 Number of Namespaces: 256 00:15:12.177 Compare Command: Supported 00:15:12.177 Write Uncorrectable Command: Not Supported 00:15:12.177 Dataset Management Command: Supported 00:15:12.177 Write Zeroes Command: Supported 00:15:12.177 Set Features Save Field: Supported 00:15:12.177 Reservations: Not Supported 00:15:12.177 Timestamp: Supported 00:15:12.177 Copy: Supported 00:15:12.177 Volatile Write Cache: Present 00:15:12.177 Atomic Write Unit (Normal): 1 00:15:12.177 Atomic Write Unit (PFail): 1 00:15:12.177 Atomic Compare & Write Unit: 1 00:15:12.177 Fused Compare & Write: Not Supported 00:15:12.177 Scatter-Gather List 00:15:12.177 SGL Command Set: Supported 00:15:12.177 SGL Keyed: Not Supported 00:15:12.177 SGL Bit Bucket Descriptor: Not Supported 00:15:12.177 SGL Metadata Pointer: Not Supported 00:15:12.177 Oversized SGL: Not Supported 00:15:12.177 SGL Metadata Address: Not Supported 00:15:12.177 SGL Offset: Not Supported 00:15:12.177 Transport SGL Data Block: Not Supported 00:15:12.177 Replay Protected Memory Block: Not Supported 00:15:12.177 00:15:12.177 Firmware Slot Information 00:15:12.177 ========================= 00:15:12.177 Active slot: 1 00:15:12.177 Slot 1 Firmware Revision: 1.0 00:15:12.177 00:15:12.177 00:15:12.177 Commands Supported and Effects 00:15:12.177 ============================== 00:15:12.177 Admin Commands 00:15:12.177 -------------- 00:15:12.177 Delete I/O Submission Queue (00h): Supported 00:15:12.177 Create I/O Submission Queue (01h): Supported 00:15:12.177 Get Log Page (02h): Supported 00:15:12.177 Delete I/O Completion Queue (04h): Supported 00:15:12.177 Create I/O Completion Queue (05h): Supported 00:15:12.177 Identify (06h): Supported 00:15:12.177 Abort (08h): Supported 00:15:12.177 Set Features (09h): Supported 00:15:12.178 Get Features (0Ah): Supported 00:15:12.178 Asynchronous Event Request (0Ch): Supported 00:15:12.178 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:12.178 Directive Send (19h): Supported 00:15:12.178 Directive Receive (1Ah): Supported 00:15:12.178 Virtualization Management (1Ch): Supported 00:15:12.178 Doorbell Buffer Config (7Ch): Supported 00:15:12.178 Format NVM (80h): Supported LBA-Change 00:15:12.178 I/O Commands 00:15:12.178 ------------ 00:15:12.178 Flush (00h): Supported LBA-Change 00:15:12.178 Write (01h): Supported LBA-Change 00:15:12.178 Read (02h): Supported 00:15:12.178 Compare (05h): Supported 00:15:12.178 Write Zeroes (08h): Supported LBA-Change 00:15:12.178 Dataset Management (09h): Supported LBA-Change 00:15:12.178 Unknown (0Ch): Supported 00:15:12.178 Unknown (12h): Supported 00:15:12.178 Copy (19h): Supported LBA-Change 00:15:12.178 Unknown (1Dh): Supported LBA-Change 00:15:12.178 00:15:12.178 Error Log 00:15:12.178 ========= 00:15:12.178 00:15:12.178 Arbitration 00:15:12.178 =========== 00:15:12.178 Arbitration Burst: no limit 00:15:12.178 00:15:12.178 Power Management 00:15:12.178 ================ 00:15:12.178 Number of Power States: 1 00:15:12.178 Current Power State: Power State #0 00:15:12.178 Power State #0: 00:15:12.178 Max Power: 25.00 W 00:15:12.178 Non-Operational State: Operational 00:15:12.178 Entry Latency: 16 microseconds 00:15:12.178 Exit Latency: 4 microseconds 00:15:12.178 Relative Read Throughput: 0 00:15:12.178 Relative Read Latency: 0 00:15:12.178 Relative Write Throughput: 0 00:15:12.178 Relative Write Latency: 0 00:15:12.178 Idle Power: Not Reported 00:15:12.178 Active Power: Not Reported 00:15:12.178 Non-Operational Permissive Mode: Not Supported 
00:15:12.178 00:15:12.178 Health Information 00:15:12.178 ================== 00:15:12.178 Critical Warnings: 00:15:12.178 Available Spare Space: OK 00:15:12.178 Temperature: OK 00:15:12.178 Device Reliability: OK 00:15:12.178 Read Only: No 00:15:12.178 Volatile Memory Backup: OK 00:15:12.178 Current Temperature: 323 Kelvin (50 Celsius) 00:15:12.178 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:12.178 Available Spare: 0% 00:15:12.178 Available Spare Threshold: 0% 00:15:12.178 Life Percentage Used: 0% 00:15:12.178 Data Units Read: 2066 00:15:12.178 Data Units Written: 1853 00:15:12.178 Host Read Commands: 99360 00:15:12.178 Host Write Commands: 97629 00:15:12.178 Controller Busy Time: 0 minutes 00:15:12.178 Power Cycles: 0 00:15:12.178 Power On Hours: 0 hours 00:15:12.178 Unsafe Shutdowns: 0 00:15:12.178 Unrecoverable Media Errors: 0 00:15:12.178 Lifetime Error Log Entries: 0 00:15:12.178 Warning Temperature Time: 0 minutes 00:15:12.178 Critical Temperature Time: 0 minutes 00:15:12.178 00:15:12.178 Number of Queues 00:15:12.178 ================ 00:15:12.178 Number of I/O Submission Queues: 64 00:15:12.178 Number of I/O Completion Queues: 64 00:15:12.178 00:15:12.178 ZNS Specific Controller Data 00:15:12.178 ============================ 00:15:12.178 Zone Append Size Limit: 0 00:15:12.178 00:15:12.178 00:15:12.178 Active Namespaces 00:15:12.178 ================= 00:15:12.178 Namespace ID:1 00:15:12.178 Error Recovery Timeout: Unlimited 00:15:12.178 Command Set Identifier: NVM (00h) 00:15:12.178 Deallocate: Supported 00:15:12.178 Deallocated/Unwritten Error: Supported 00:15:12.178 Deallocated Read Value: All 0x00 00:15:12.178 Deallocate in Write Zeroes: Not Supported 00:15:12.178 Deallocated Guard Field: 0xFFFF 00:15:12.178 Flush: Supported 00:15:12.178 Reservation: Not Supported 00:15:12.178 Namespace Sharing Capabilities: Private 00:15:12.178 Size (in LBAs): 1048576 (4GiB) 00:15:12.178 Capacity (in LBAs): 1048576 (4GiB) 00:15:12.178 Utilization (in LBAs): 1048576 (4GiB) 00:15:12.178 Thin Provisioning: Not Supported 00:15:12.178 Per-NS Atomic Units: No 00:15:12.178 Maximum Single Source Range Length: 128 00:15:12.178 Maximum Copy Length: 128 00:15:12.178 Maximum Source Range Count: 128 00:15:12.178 NGUID/EUI64 Never Reused: No 00:15:12.178 Namespace Write Protected: No 00:15:12.178 Number of LBA Formats: 8 00:15:12.178 Current LBA Format: LBA Format #04 00:15:12.178 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:12.178 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:12.178 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:12.178 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:12.178 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:12.178 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:12.178 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:12.178 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:12.178 00:15:12.178 NVM Specific Namespace Data 00:15:12.178 =========================== 00:15:12.178 Logical Block Storage Tag Mask: 0 00:15:12.178 Protection Information Capabilities: 00:15:12.178 16b Guard Protection Information Storage Tag Support: No 00:15:12.178 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:12.178 Storage Tag Check Read Support: No 00:15:12.178 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Namespace ID:2 00:15:12.178 Error Recovery Timeout: Unlimited 00:15:12.178 Command Set Identifier: NVM (00h) 00:15:12.178 Deallocate: Supported 00:15:12.178 Deallocated/Unwritten Error: Supported 00:15:12.178 Deallocated Read Value: All 0x00 00:15:12.178 Deallocate in Write Zeroes: Not Supported 00:15:12.178 Deallocated Guard Field: 0xFFFF 00:15:12.178 Flush: Supported 00:15:12.178 Reservation: Not Supported 00:15:12.178 Namespace Sharing Capabilities: Private 00:15:12.178 Size (in LBAs): 1048576 (4GiB) 00:15:12.178 Capacity (in LBAs): 1048576 (4GiB) 00:15:12.178 Utilization (in LBAs): 1048576 (4GiB) 00:15:12.178 Thin Provisioning: Not Supported 00:15:12.178 Per-NS Atomic Units: No 00:15:12.178 Maximum Single Source Range Length: 128 00:15:12.178 Maximum Copy Length: 128 00:15:12.178 Maximum Source Range Count: 128 00:15:12.178 NGUID/EUI64 Never Reused: No 00:15:12.178 Namespace Write Protected: No 00:15:12.178 Number of LBA Formats: 8 00:15:12.178 Current LBA Format: LBA Format #04 00:15:12.178 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:12.178 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:12.178 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:12.178 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:12.178 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:12.178 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:12.178 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:12.178 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:12.178 00:15:12.178 NVM Specific Namespace Data 00:15:12.178 =========================== 00:15:12.178 Logical Block Storage Tag Mask: 0 00:15:12.178 Protection Information Capabilities: 00:15:12.178 16b Guard Protection Information Storage Tag Support: No 00:15:12.178 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:12.178 Storage Tag Check Read Support: No 00:15:12.178 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.178 Namespace ID:3 00:15:12.178 Error Recovery Timeout: Unlimited 00:15:12.178 Command Set Identifier: NVM (00h) 00:15:12.178 Deallocate: Supported 00:15:12.178 Deallocated/Unwritten Error: Supported 00:15:12.178 Deallocated Read 
Value: All 0x00 00:15:12.178 Deallocate in Write Zeroes: Not Supported 00:15:12.178 Deallocated Guard Field: 0xFFFF 00:15:12.178 Flush: Supported 00:15:12.178 Reservation: Not Supported 00:15:12.178 Namespace Sharing Capabilities: Private 00:15:12.178 Size (in LBAs): 1048576 (4GiB) 00:15:12.178 Capacity (in LBAs): 1048576 (4GiB) 00:15:12.179 Utilization (in LBAs): 1048576 (4GiB) 00:15:12.179 Thin Provisioning: Not Supported 00:15:12.179 Per-NS Atomic Units: No 00:15:12.179 Maximum Single Source Range Length: 128 00:15:12.179 Maximum Copy Length: 128 00:15:12.179 Maximum Source Range Count: 128 00:15:12.179 NGUID/EUI64 Never Reused: No 00:15:12.179 Namespace Write Protected: No 00:15:12.179 Number of LBA Formats: 8 00:15:12.179 Current LBA Format: LBA Format #04 00:15:12.179 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:12.179 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:12.179 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:12.179 LBA Format #03: Data Size: 512 Metadata Size: 64 00:15:12.179 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:12.179 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:12.179 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:12.179 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:12.179 00:15:12.179 NVM Specific Namespace Data 00:15:12.179 =========================== 00:15:12.179 Logical Block Storage Tag Mask: 0 00:15:12.179 Protection Information Capabilities: 00:15:12.179 16b Guard Protection Information Storage Tag Support: No 00:15:12.179 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:12.179 Storage Tag Check Read Support: No 00:15:12.179 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.179 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.179 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.179 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.179 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.179 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.179 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.179 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.179 13:09:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:15:12.179 13:09:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:15:12.503 ===================================================== 00:15:12.503 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:12.503 ===================================================== 00:15:12.503 Controller Capabilities/Features 00:15:12.503 ================================ 00:15:12.503 Vendor ID: 1b36 00:15:12.503 Subsystem Vendor ID: 1af4 00:15:12.503 Serial Number: 12343 00:15:12.503 Model Number: QEMU NVMe Ctrl 00:15:12.503 Firmware Version: 8.0.0 00:15:12.503 Recommended Arb Burst: 6 00:15:12.503 IEEE OUI Identifier: 00 54 52 00:15:12.503 Multi-path I/O 00:15:12.503 May have multiple subsystem ports: No 00:15:12.503 May have multiple controllers: Yes 00:15:12.503 Associated with SR-IOV VF: No 00:15:12.503 Max Data Transfer Size: 524288 00:15:12.503 Max Number of Namespaces: 
256 00:15:12.503 Max Number of I/O Queues: 64 00:15:12.503 NVMe Specification Version (VS): 1.4 00:15:12.503 NVMe Specification Version (Identify): 1.4 00:15:12.503 Maximum Queue Entries: 2048 00:15:12.503 Contiguous Queues Required: Yes 00:15:12.503 Arbitration Mechanisms Supported 00:15:12.503 Weighted Round Robin: Not Supported 00:15:12.503 Vendor Specific: Not Supported 00:15:12.503 Reset Timeout: 7500 ms 00:15:12.503 Doorbell Stride: 4 bytes 00:15:12.503 NVM Subsystem Reset: Not Supported 00:15:12.503 Command Sets Supported 00:15:12.503 NVM Command Set: Supported 00:15:12.503 Boot Partition: Not Supported 00:15:12.503 Memory Page Size Minimum: 4096 bytes 00:15:12.503 Memory Page Size Maximum: 65536 bytes 00:15:12.503 Persistent Memory Region: Not Supported 00:15:12.503 Optional Asynchronous Events Supported 00:15:12.503 Namespace Attribute Notices: Supported 00:15:12.503 Firmware Activation Notices: Not Supported 00:15:12.503 ANA Change Notices: Not Supported 00:15:12.503 PLE Aggregate Log Change Notices: Not Supported 00:15:12.503 LBA Status Info Alert Notices: Not Supported 00:15:12.503 EGE Aggregate Log Change Notices: Not Supported 00:15:12.503 Normal NVM Subsystem Shutdown event: Not Supported 00:15:12.503 Zone Descriptor Change Notices: Not Supported 00:15:12.503 Discovery Log Change Notices: Not Supported 00:15:12.503 Controller Attributes 00:15:12.503 128-bit Host Identifier: Not Supported 00:15:12.503 Non-Operational Permissive Mode: Not Supported 00:15:12.503 NVM Sets: Not Supported 00:15:12.503 Read Recovery Levels: Not Supported 00:15:12.503 Endurance Groups: Supported 00:15:12.503 Predictable Latency Mode: Not Supported 00:15:12.503 Traffic Based Keep Alive: Not Supported 00:15:12.503 Namespace Granularity: Not Supported 00:15:12.503 SQ Associations: Not Supported 00:15:12.503 UUID List: Not Supported 00:15:12.503 Multi-Domain Subsystem: Not Supported 00:15:12.503 Fixed Capacity Management: Not Supported 00:15:12.503 Variable Capacity Management: Not Supported 00:15:12.503 Delete Endurance Group: Not Supported 00:15:12.503 Delete NVM Set: Not Supported 00:15:12.503 Extended LBA Formats Supported: Supported 00:15:12.503 Flexible Data Placement Supported: Supported 00:15:12.503 00:15:12.504 Controller Memory Buffer Support 00:15:12.504 ================================ 00:15:12.504 Supported: No 00:15:12.504 00:15:12.504 Persistent Memory Region Support 00:15:12.504 ================================ 00:15:12.504 Supported: No 00:15:12.504 00:15:12.504 Admin Command Set Attributes 00:15:12.504 ============================ 00:15:12.504 Security Send/Receive: Not Supported 00:15:12.504 Format NVM: Supported 00:15:12.504 Firmware Activate/Download: Not Supported 00:15:12.504 Namespace Management: Supported 00:15:12.504 Device Self-Test: Not Supported 00:15:12.504 Directives: Supported 00:15:12.504 NVMe-MI: Not Supported 00:15:12.504 Virtualization Management: Not Supported 00:15:12.504 Doorbell Buffer Config: Supported 00:15:12.504 Get LBA Status Capability: Not Supported 00:15:12.504 Command & Feature Lockdown Capability: Not Supported 00:15:12.504 Abort Command Limit: 4 00:15:12.504 Async Event Request Limit: 4 00:15:12.504 Number of Firmware Slots: N/A 00:15:12.504 Firmware Slot 1 Read-Only: N/A 00:15:12.504 Firmware Activation Without Reset: N/A 00:15:12.504 Multiple Update Detection Support: N/A 00:15:12.504 Firmware Update Granularity: No Information Provided 00:15:12.504 Per-Namespace SMART Log: Yes 00:15:12.504 Asymmetric Namespace Access Log Page: Not Supported
00:15:12.504 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:15:12.504 Command Effects Log Page: Supported 00:15:12.504 Get Log Page Extended Data: Supported 00:15:12.504 Telemetry Log Pages: Not Supported 00:15:12.504 Persistent Event Log Pages: Not Supported 00:15:12.504 Supported Log Pages Log Page: May Support 00:15:12.504 Commands Supported & Effects Log Page: Not Supported 00:15:12.504 Feature Identifiers & Effects Log Page: May Support 00:15:12.504 NVMe-MI Commands & Effects Log Page: May Support 00:15:12.504 Data Area 4 for Telemetry Log: Not Supported 00:15:12.504 Error Log Page Entries Supported: 1 00:15:12.504 Keep Alive: Not Supported 00:15:12.504 00:15:12.504 NVM Command Set Attributes 00:15:12.504 ========================== 00:15:12.504 Submission Queue Entry Size 00:15:12.504 Max: 64 00:15:12.504 Min: 64 00:15:12.504 Completion Queue Entry Size 00:15:12.504 Max: 16 00:15:12.504 Min: 16 00:15:12.504 Number of Namespaces: 256 00:15:12.504 Compare Command: Supported 00:15:12.504 Write Uncorrectable Command: Not Supported 00:15:12.504 Dataset Management Command: Supported 00:15:12.504 Write Zeroes Command: Supported 00:15:12.504 Set Features Save Field: Supported 00:15:12.504 Reservations: Not Supported 00:15:12.504 Timestamp: Supported 00:15:12.504 Copy: Supported 00:15:12.504 Volatile Write Cache: Present 00:15:12.504 Atomic Write Unit (Normal): 1 00:15:12.504 Atomic Write Unit (PFail): 1 00:15:12.504 Atomic Compare & Write Unit: 1 00:15:12.504 Fused Compare & Write: Not Supported 00:15:12.504 Scatter-Gather List 00:15:12.504 SGL Command Set: Supported 00:15:12.504 SGL Keyed: Not Supported 00:15:12.504 SGL Bit Bucket Descriptor: Not Supported 00:15:12.504 SGL Metadata Pointer: Not Supported 00:15:12.504 Oversized SGL: Not Supported 00:15:12.504 SGL Metadata Address: Not Supported 00:15:12.504 SGL Offset: Not Supported 00:15:12.504 Transport SGL Data Block: Not Supported 00:15:12.504 Replay Protected Memory Block: Not Supported 00:15:12.504 00:15:12.504 Firmware Slot Information 00:15:12.504 ========================= 00:15:12.504 Active slot: 1 00:15:12.504 Slot 1 Firmware Revision: 1.0 00:15:12.504 00:15:12.504 00:15:12.504 Commands Supported and Effects 00:15:12.504 ============================== 00:15:12.504 Admin Commands 00:15:12.504 -------------- 00:15:12.504 Delete I/O Submission Queue (00h): Supported 00:15:12.504 Create I/O Submission Queue (01h): Supported 00:15:12.504 Get Log Page (02h): Supported 00:15:12.504 Delete I/O Completion Queue (04h): Supported 00:15:12.504 Create I/O Completion Queue (05h): Supported 00:15:12.504 Identify (06h): Supported 00:15:12.504 Abort (08h): Supported 00:15:12.504 Set Features (09h): Supported 00:15:12.504 Get Features (0Ah): Supported 00:15:12.504 Asynchronous Event Request (0Ch): Supported 00:15:12.504 Namespace Attachment (15h): Supported NS-Inventory-Change 00:15:12.504 Directive Send (19h): Supported 00:15:12.504 Directive Receive (1Ah): Supported 00:15:12.504 Virtualization Management (1Ch): Supported 00:15:12.504 Doorbell Buffer Config (7Ch): Supported 00:15:12.504 Format NVM (80h): Supported LBA-Change 00:15:12.504 I/O Commands 00:15:12.504 ------------ 00:15:12.504 Flush (00h): Supported LBA-Change 00:15:12.504 Write (01h): Supported LBA-Change 00:15:12.504 Read (02h): Supported 00:15:12.504 Compare (05h): Supported 00:15:12.504 Write Zeroes (08h): Supported LBA-Change 00:15:12.504 Dataset Management (09h): Supported LBA-Change 00:15:12.504 Unknown (0Ch): Supported 00:15:12.504 Unknown (12h): Supported 00:15:12.504 Copy
(19h): Supported LBA-Change 00:15:12.504 Unknown (1Dh): Supported LBA-Change 00:15:12.504 00:15:12.504 Error Log 00:15:12.504 ========= 00:15:12.504 00:15:12.504 Arbitration 00:15:12.504 =========== 00:15:12.504 Arbitration Burst: no limit 00:15:12.504 00:15:12.504 Power Management 00:15:12.504 ================ 00:15:12.504 Number of Power States: 1 00:15:12.504 Current Power State: Power State #0 00:15:12.504 Power State #0: 00:15:12.504 Max Power: 25.00 W 00:15:12.504 Non-Operational State: Operational 00:15:12.504 Entry Latency: 16 microseconds 00:15:12.504 Exit Latency: 4 microseconds 00:15:12.504 Relative Read Throughput: 0 00:15:12.504 Relative Read Latency: 0 00:15:12.504 Relative Write Throughput: 0 00:15:12.504 Relative Write Latency: 0 00:15:12.504 Idle Power: Not Reported 00:15:12.504 Active Power: Not Reported 00:15:12.504 Non-Operational Permissive Mode: Not Supported 00:15:12.504 00:15:12.504 Health Information 00:15:12.504 ================== 00:15:12.504 Critical Warnings: 00:15:12.504 Available Spare Space: OK 00:15:12.504 Temperature: OK 00:15:12.504 Device Reliability: OK 00:15:12.504 Read Only: No 00:15:12.504 Volatile Memory Backup: OK 00:15:12.504 Current Temperature: 323 Kelvin (50 Celsius) 00:15:12.504 Temperature Threshold: 343 Kelvin (70 Celsius) 00:15:12.504 Available Spare: 0% 00:15:12.504 Available Spare Threshold: 0% 00:15:12.505 Life Percentage Used: 0% 00:15:12.505 Data Units Read: 756 00:15:12.505 Data Units Written: 685 00:15:12.505 Host Read Commands: 33707 00:15:12.505 Host Write Commands: 33130 00:15:12.505 Controller Busy Time: 0 minutes 00:15:12.505 Power Cycles: 0 00:15:12.505 Power On Hours: 0 hours 00:15:12.505 Unsafe Shutdowns: 0 00:15:12.505 Unrecoverable Media Errors: 0 00:15:12.505 Lifetime Error Log Entries: 0 00:15:12.505 Warning Temperature Time: 0 minutes 00:15:12.505 Critical Temperature Time: 0 minutes 00:15:12.505 00:15:12.505 Number of Queues 00:15:12.505 ================ 00:15:12.505 Number of I/O Submission Queues: 64 00:15:12.505 Number of I/O Completion Queues: 64 00:15:12.505 00:15:12.505 ZNS Specific Controller Data 00:15:12.505 ============================ 00:15:12.505 Zone Append Size Limit: 0 00:15:12.505 00:15:12.505 00:15:12.505 Active Namespaces 00:15:12.505 ================= 00:15:12.505 Namespace ID:1 00:15:12.505 Error Recovery Timeout: Unlimited 00:15:12.505 Command Set Identifier: NVM (00h) 00:15:12.505 Deallocate: Supported 00:15:12.505 Deallocated/Unwritten Error: Supported 00:15:12.505 Deallocated Read Value: All 0x00 00:15:12.505 Deallocate in Write Zeroes: Not Supported 00:15:12.505 Deallocated Guard Field: 0xFFFF 00:15:12.505 Flush: Supported 00:15:12.505 Reservation: Not Supported 00:15:12.505 Namespace Sharing Capabilities: Multiple Controllers 00:15:12.505 Size (in LBAs): 262144 (1GiB) 00:15:12.505 Capacity (in LBAs): 262144 (1GiB) 00:15:12.505 Utilization (in LBAs): 262144 (1GiB) 00:15:12.505 Thin Provisioning: Not Supported 00:15:12.505 Per-NS Atomic Units: No 00:15:12.505 Maximum Single Source Range Length: 128 00:15:12.505 Maximum Copy Length: 128 00:15:12.505 Maximum Source Range Count: 128 00:15:12.505 NGUID/EUI64 Never Reused: No 00:15:12.505 Namespace Write Protected: No 00:15:12.505 Endurance group ID: 1 00:15:12.505 Number of LBA Formats: 8 00:15:12.505 Current LBA Format: LBA Format #04 00:15:12.505 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:12.505 LBA Format #01: Data Size: 512 Metadata Size: 8 00:15:12.505 LBA Format #02: Data Size: 512 Metadata Size: 16 00:15:12.505 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:15:12.505 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:15:12.505 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:15:12.505 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:15:12.505 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:15:12.505 00:15:12.505 Get Feature FDP: 00:15:12.505 ================ 00:15:12.505 Enabled: Yes 00:15:12.505 FDP configuration index: 0 00:15:12.505 00:15:12.505 FDP configurations log page 00:15:12.505 =========================== 00:15:12.505 Number of FDP configurations: 1 00:15:12.505 Version: 0 00:15:12.505 Size: 112 00:15:12.505 FDP Configuration Descriptor: 0 00:15:12.505 Descriptor Size: 96 00:15:12.505 Reclaim Group Identifier format: 2 00:15:12.505 FDP Volatile Write Cache: Not Present 00:15:12.505 FDP Configuration: Valid 00:15:12.505 Vendor Specific Size: 0 00:15:12.505 Number of Reclaim Groups: 2 00:15:12.505 Number of Reclaim Unit Handles: 8 00:15:12.505 Max Placement Identifiers: 128 00:15:12.505 Number of Namespaces Supported: 256 00:15:12.505 Reclaim Unit Nominal Size: 6000000 bytes 00:15:12.505 Estimated Reclaim Unit Time Limit: Not Reported 00:15:12.505 RUH Desc #000: RUH Type: Initially Isolated 00:15:12.505 RUH Desc #001: RUH Type: Initially Isolated 00:15:12.505 RUH Desc #002: RUH Type: Initially Isolated 00:15:12.505 RUH Desc #003: RUH Type: Initially Isolated 00:15:12.505 RUH Desc #004: RUH Type: Initially Isolated 00:15:12.505 RUH Desc #005: RUH Type: Initially Isolated 00:15:12.505 RUH Desc #006: RUH Type: Initially Isolated 00:15:12.505 RUH Desc #007: RUH Type: Initially Isolated 00:15:12.505 00:15:12.505 FDP reclaim unit handle usage log page 00:15:12.505 ====================================== 00:15:12.505 Number of Reclaim Unit Handles: 8 00:15:12.505 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:12.505 RUH Usage Desc #001: RUH Attributes: Unused 00:15:12.505 RUH Usage Desc #002: RUH Attributes: Unused 00:15:12.505 RUH Usage Desc #003: RUH Attributes: Unused 00:15:12.505 RUH Usage Desc #004: RUH Attributes: Unused 00:15:12.505 RUH Usage Desc #005: RUH Attributes: Unused 00:15:12.505 RUH Usage Desc #006: RUH Attributes: Unused 00:15:12.505 RUH Usage Desc #007: RUH Attributes: Unused 00:15:12.505 00:15:12.505 FDP statistics log page 00:15:12.505 ======================= 00:15:12.505 Host bytes with metadata written: 426942464 00:15:12.505 Media bytes with metadata written: 426987520 00:15:12.505 Media bytes erased: 0 00:15:12.505 00:15:12.505 FDP events log page 00:15:12.505 =================== 00:15:12.505 Number of FDP events: 0 00:15:12.505 00:15:12.505 NVM Specific Namespace Data 00:15:12.505 =========================== 00:15:12.505 Logical Block Storage Tag Mask: 0 00:15:12.505 Protection Information Capabilities: 00:15:12.505 16b Guard Protection Information Storage Tag Support: No 00:15:12.505 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:15:12.505 Storage Tag Check Read Support: No 00:15:12.505 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.505 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.505 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.505 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.505 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.505 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.505 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.505 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:15:12.505 ************************************ 00:15:12.505 END TEST nvme_identify 00:15:12.505 ************************************ 00:15:12.505 00:15:12.505 real 0m1.715s 00:15:12.505 user 0m0.709s 00:15:12.505 sys 0m0.791s 00:15:12.505 13:09:18 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.505 13:09:18 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:15:12.772 13:09:19 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:15:12.772 13:09:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:12.772 13:09:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.772 13:09:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.772 ************************************ 00:15:12.772 START TEST nvme_perf 00:15:12.772 ************************************ 00:15:12.773 13:09:19 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:15:12.773 13:09:19 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:15:14.149 Initializing NVMe Controllers 00:15:14.150 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:14.150 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:14.150 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:14.150 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:14.150 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:14.150 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:14.150 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:14.150 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:14.150 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:14.150 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:14.150 Initialization complete. Launching workers. 
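For context on the perf run whose results follow: spdk_nvme_perf is SPDK's example benchmark binary, and the flags in the invocation above map to -q 128 (queue depth), -w read (sequential-read workload), -o 12288 (12 KiB I/O size, i.e. 12 x 1024 bytes), -t 1 (one second of runtime), and -LL (software latency tracking, where the doubled L requests the detailed per-bucket histograms printed further below). A minimal hand-run sketch under those assumptions is shown here; the -r transport filter and the perf.log name are illustrative additions rather than part of the original invocation, -i 0 and -N are kept verbatim from the run above, and all flag semantics should be confirmed against `spdk_nvme_perf --help`:

  # Hedged sketch: repeat the benchmark above by hand against a single controller.
  # 0000:00:10.0 is taken from the attach messages above; perf.log is an
  # illustrative output name, not produced by the original job.
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  sudo "$PERF" -r 'trtype:PCIe traddr:0000:00:10.0' \
       -q 128 -w read -o 12288 -t 1 -LL -i 0 -N | tee perf.log

As a sanity check on the summary table below, IOPS times I/O size should equal throughput: 75262.55 IOPS x 12288 bytes is approximately 924.8 MB/s, or about 882 MiB/s, matching the 881.98 MiB/s Total row.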
00:15:14.150 ======================================================== 00:15:14.150 Latency(us) 00:15:14.150 Device Information : IOPS MiB/s Average min max 00:15:14.150 PCIE (0000:00:10.0) NSID 1 from core 0: 12522.46 146.75 10243.11 7926.89 45328.88 00:15:14.150 PCIE (0000:00:11.0) NSID 1 from core 0: 12522.46 146.75 10226.14 8025.25 43087.81 00:15:14.150 PCIE (0000:00:13.0) NSID 1 from core 0: 12522.46 146.75 10207.24 8018.25 42275.39 00:15:14.150 PCIE (0000:00:12.0) NSID 1 from core 0: 12522.46 146.75 10187.84 8082.26 40649.42 00:15:14.150 PCIE (0000:00:12.0) NSID 2 from core 0: 12586.35 147.50 10117.08 8008.46 32077.84 00:15:14.150 PCIE (0000:00:12.0) NSID 3 from core 0: 12586.35 147.50 10097.82 8028.24 30014.73 00:15:14.150 ======================================================== 00:15:14.150 Total : 75262.55 881.98 10179.75 7926.89 45328.88 00:15:14.150 00:15:14.150 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:15:14.150 ================================================================================= 00:15:14.150 1.00000% : 8400.524us 00:15:14.150 10.00000% : 8877.149us 00:15:14.150 25.00000% : 9234.618us 00:15:14.150 50.00000% : 9711.244us 00:15:14.150 75.00000% : 10247.447us 00:15:14.150 90.00000% : 11439.011us 00:15:14.150 95.00000% : 13285.935us 00:15:14.150 98.00000% : 14715.811us 00:15:14.150 99.00000% : 36223.535us 00:15:14.150 99.50000% : 43372.916us 00:15:14.150 99.90000% : 45041.105us 00:15:14.150 99.99000% : 45279.418us 00:15:14.150 99.99900% : 45517.731us 00:15:14.150 99.99990% : 45517.731us 00:15:14.150 99.99999% : 45517.731us 00:15:14.150 00:15:14.150 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:15:14.150 ================================================================================= 00:15:14.150 1.00000% : 8460.102us 00:15:14.150 10.00000% : 8936.727us 00:15:14.150 25.00000% : 9234.618us 00:15:14.150 50.00000% : 9651.665us 00:15:14.150 75.00000% : 10187.869us 00:15:14.150 90.00000% : 11498.589us 00:15:14.150 95.00000% : 13285.935us 00:15:14.150 98.00000% : 14537.076us 00:15:14.150 99.00000% : 34317.033us 00:15:14.150 99.50000% : 41228.102us 00:15:14.150 99.90000% : 42896.291us 00:15:14.150 99.99000% : 43134.604us 00:15:14.150 99.99900% : 43134.604us 00:15:14.150 99.99990% : 43134.604us 00:15:14.150 99.99999% : 43134.604us 00:15:14.150 00:15:14.150 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:15:14.150 ================================================================================= 00:15:14.150 1.00000% : 8460.102us 00:15:14.150 10.00000% : 8936.727us 00:15:14.150 25.00000% : 9234.618us 00:15:14.150 50.00000% : 9651.665us 00:15:14.150 75.00000% : 10187.869us 00:15:14.150 90.00000% : 11498.589us 00:15:14.150 95.00000% : 13345.513us 00:15:14.150 98.00000% : 14477.498us 00:15:14.150 99.00000% : 32648.844us 00:15:14.150 99.50000% : 40274.851us 00:15:14.150 99.90000% : 41943.040us 00:15:14.150 99.99000% : 42419.665us 00:15:14.150 99.99900% : 42419.665us 00:15:14.150 99.99990% : 42419.665us 00:15:14.150 99.99999% : 42419.665us 00:15:14.150 00:15:14.150 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:15:14.150 ================================================================================= 00:15:14.150 1.00000% : 8460.102us 00:15:14.150 10.00000% : 8936.727us 00:15:14.150 25.00000% : 9234.618us 00:15:14.150 50.00000% : 9651.665us 00:15:14.150 75.00000% : 10187.869us 00:15:14.150 90.00000% : 11558.167us 00:15:14.150 95.00000% : 13166.778us 00:15:14.150 98.00000% : 14477.498us 
00:15:14.150 99.00000% : 30504.029us 00:15:14.150 99.50000% : 38606.662us 00:15:14.150 99.90000% : 40274.851us 00:15:14.150 99.99000% : 40751.476us 00:15:14.150 99.99900% : 40751.476us 00:15:14.150 99.99990% : 40751.476us 00:15:14.150 99.99999% : 40751.476us 00:15:14.150 00:15:14.150 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:15:14.150 ================================================================================= 00:15:14.150 1.00000% : 8460.102us 00:15:14.150 10.00000% : 8936.727us 00:15:14.150 25.00000% : 9234.618us 00:15:14.150 50.00000% : 9651.665us 00:15:14.150 75.00000% : 10247.447us 00:15:14.150 90.00000% : 11617.745us 00:15:14.150 95.00000% : 13047.622us 00:15:14.150 98.00000% : 14596.655us 00:15:14.150 99.00000% : 22758.865us 00:15:14.150 99.50000% : 29908.247us 00:15:14.150 99.90000% : 31695.593us 00:15:14.150 99.99000% : 32172.218us 00:15:14.150 99.99900% : 32172.218us 00:15:14.150 99.99990% : 32172.218us 00:15:14.150 99.99999% : 32172.218us 00:15:14.150 00:15:14.150 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:15:14.150 ================================================================================= 00:15:14.150 1.00000% : 8460.102us 00:15:14.150 10.00000% : 8936.727us 00:15:14.150 25.00000% : 9234.618us 00:15:14.150 50.00000% : 9651.665us 00:15:14.150 75.00000% : 10187.869us 00:15:14.150 90.00000% : 11558.167us 00:15:14.150 95.00000% : 13166.778us 00:15:14.150 98.00000% : 14596.655us 00:15:14.150 99.00000% : 20733.207us 00:15:14.150 99.50000% : 27882.589us 00:15:14.150 99.90000% : 29669.935us 00:15:14.150 99.99000% : 30027.404us 00:15:14.150 99.99900% : 30027.404us 00:15:14.150 99.99990% : 30027.404us 00:15:14.150 99.99999% : 30027.404us 00:15:14.150 00:15:14.150 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:15:14.150 ============================================================================== 00:15:14.150 Range in us Cumulative IO count 00:15:14.150 7923.898 - 7983.476: 0.0558% ( 7) 00:15:14.150 7983.476 - 8043.055: 0.1594% ( 13) 00:15:14.150 8043.055 - 8102.633: 0.2471% ( 11) 00:15:14.150 8102.633 - 8162.211: 0.3587% ( 14) 00:15:14.150 8162.211 - 8221.789: 0.5022% ( 18) 00:15:14.150 8221.789 - 8281.367: 0.6696% ( 21) 00:15:14.151 8281.367 - 8340.945: 0.8769% ( 26) 00:15:14.151 8340.945 - 8400.524: 1.1878% ( 39) 00:15:14.151 8400.524 - 8460.102: 1.5147% ( 41) 00:15:14.151 8460.102 - 8519.680: 1.9611% ( 56) 00:15:14.151 8519.680 - 8579.258: 2.7025% ( 93) 00:15:14.151 8579.258 - 8638.836: 3.7388% ( 130) 00:15:14.151 8638.836 - 8698.415: 5.1180% ( 173) 00:15:14.151 8698.415 - 8757.993: 6.9675% ( 232) 00:15:14.151 8757.993 - 8817.571: 8.7930% ( 229) 00:15:14.151 8817.571 - 8877.149: 10.8259% ( 255) 00:15:14.151 8877.149 - 8936.727: 13.2254% ( 301) 00:15:14.151 8936.727 - 8996.305: 15.7605% ( 318) 00:15:14.151 8996.305 - 9055.884: 18.5108% ( 345) 00:15:14.151 9055.884 - 9115.462: 21.2054% ( 338) 00:15:14.151 9115.462 - 9175.040: 24.0753% ( 360) 00:15:14.151 9175.040 - 9234.618: 27.0328% ( 371) 00:15:14.151 9234.618 - 9294.196: 30.2296% ( 401) 00:15:14.151 9294.196 - 9353.775: 33.3626% ( 393) 00:15:14.151 9353.775 - 9413.353: 36.6629% ( 414) 00:15:14.151 9413.353 - 9472.931: 39.9394% ( 411) 00:15:14.151 9472.931 - 9532.509: 43.1920% ( 408) 00:15:14.151 9532.509 - 9592.087: 46.4047% ( 403) 00:15:14.151 9592.087 - 9651.665: 49.6572% ( 408) 00:15:14.151 9651.665 - 9711.244: 52.8858% ( 405) 00:15:14.151 9711.244 - 9770.822: 56.0587% ( 398) 00:15:14.151 9770.822 - 9830.400: 59.2395% ( 399) 00:15:14.151 9830.400 - 
9889.978: 62.3007% ( 384) 00:15:14.151 9889.978 - 9949.556: 65.1706% ( 360) 00:15:14.151 9949.556 - 10009.135: 67.8173% ( 332) 00:15:14.151 10009.135 - 10068.713: 70.2168% ( 301) 00:15:14.151 10068.713 - 10128.291: 72.2258% ( 252) 00:15:14.151 10128.291 - 10187.869: 74.0753% ( 232) 00:15:14.151 10187.869 - 10247.447: 75.6378% ( 196) 00:15:14.151 10247.447 - 10307.025: 77.1684% ( 192) 00:15:14.151 10307.025 - 10366.604: 78.4439% ( 160) 00:15:14.151 10366.604 - 10426.182: 79.5839% ( 143) 00:15:14.151 10426.182 - 10485.760: 80.6441% ( 133) 00:15:14.151 10485.760 - 10545.338: 81.5529% ( 114) 00:15:14.151 10545.338 - 10604.916: 82.4298% ( 110) 00:15:14.151 10604.916 - 10664.495: 83.2430% ( 102) 00:15:14.151 10664.495 - 10724.073: 83.9365% ( 87) 00:15:14.151 10724.073 - 10783.651: 84.6221% ( 86) 00:15:14.151 10783.651 - 10843.229: 85.2360% ( 77) 00:15:14.151 10843.229 - 10902.807: 85.8498% ( 77) 00:15:14.151 10902.807 - 10962.385: 86.3680% ( 65) 00:15:14.151 10962.385 - 11021.964: 86.8862% ( 65) 00:15:14.151 11021.964 - 11081.542: 87.4283% ( 68) 00:15:14.151 11081.542 - 11141.120: 87.9783% ( 69) 00:15:14.151 11141.120 - 11200.698: 88.4566% ( 60) 00:15:14.151 11200.698 - 11260.276: 88.9270% ( 59) 00:15:14.151 11260.276 - 11319.855: 89.3335% ( 51) 00:15:14.151 11319.855 - 11379.433: 89.7162% ( 48) 00:15:14.151 11379.433 - 11439.011: 90.0909% ( 47) 00:15:14.151 11439.011 - 11498.589: 90.4177% ( 41) 00:15:14.151 11498.589 - 11558.167: 90.7765% ( 45) 00:15:14.151 11558.167 - 11617.745: 91.1272% ( 44) 00:15:14.151 11617.745 - 11677.324: 91.3744% ( 31) 00:15:14.151 11677.324 - 11736.902: 91.6295% ( 32) 00:15:14.151 11736.902 - 11796.480: 91.8048% ( 22) 00:15:14.151 11796.480 - 11856.058: 92.0121% ( 26) 00:15:14.151 11856.058 - 11915.636: 92.1716% ( 20) 00:15:14.151 11915.636 - 11975.215: 92.3469% ( 22) 00:15:14.151 11975.215 - 12034.793: 92.5303% ( 23) 00:15:14.151 12034.793 - 12094.371: 92.7136% ( 23) 00:15:14.151 12094.371 - 12153.949: 92.8731% ( 20) 00:15:14.151 12153.949 - 12213.527: 93.0325% ( 20) 00:15:14.151 12213.527 - 12273.105: 93.1840% ( 19) 00:15:14.151 12273.105 - 12332.684: 93.2956% ( 14) 00:15:14.151 12332.684 - 12392.262: 93.4391% ( 18) 00:15:14.151 12392.262 - 12451.840: 93.5826% ( 18) 00:15:14.151 12451.840 - 12511.418: 93.7420% ( 20) 00:15:14.151 12511.418 - 12570.996: 93.8297% ( 11) 00:15:14.151 12570.996 - 12630.575: 93.9413% ( 14) 00:15:14.151 12630.575 - 12690.153: 94.0450% ( 13) 00:15:14.151 12690.153 - 12749.731: 94.1247% ( 10) 00:15:14.151 12749.731 - 12809.309: 94.2044% ( 10) 00:15:14.151 12809.309 - 12868.887: 94.2841% ( 10) 00:15:14.151 12868.887 - 12928.465: 94.3718% ( 11) 00:15:14.151 12928.465 - 12988.044: 94.4754% ( 13) 00:15:14.151 12988.044 - 13047.622: 94.6030% ( 16) 00:15:14.151 13047.622 - 13107.200: 94.7226% ( 15) 00:15:14.151 13107.200 - 13166.778: 94.8501% ( 16) 00:15:14.151 13166.778 - 13226.356: 94.9777% ( 16) 00:15:14.151 13226.356 - 13285.935: 95.0813% ( 13) 00:15:14.151 13285.935 - 13345.513: 95.1849% ( 13) 00:15:14.151 13345.513 - 13405.091: 95.2726% ( 11) 00:15:14.151 13405.091 - 13464.669: 95.3763% ( 13) 00:15:14.151 13464.669 - 13524.247: 95.4480% ( 9) 00:15:14.151 13524.247 - 13583.825: 95.5596% ( 14) 00:15:14.151 13583.825 - 13643.404: 95.6712% ( 14) 00:15:14.151 13643.404 - 13702.982: 95.7988% ( 16) 00:15:14.151 13702.982 - 13762.560: 95.9343% ( 17) 00:15:14.151 13762.560 - 13822.138: 96.0619% ( 16) 00:15:14.151 13822.138 - 13881.716: 96.2133% ( 19) 00:15:14.151 13881.716 - 13941.295: 96.3728% ( 20) 00:15:14.151 13941.295 - 14000.873: 96.5402% ( 21) 
00:15:14.151 14000.873 - 14060.451: 96.6916% ( 19) 00:15:14.151 14060.451 - 14120.029: 96.8431% ( 19) 00:15:14.151 14120.029 - 14179.607: 96.9866% ( 18) 00:15:14.151 14179.607 - 14239.185: 97.1221% ( 17) 00:15:14.151 14239.185 - 14298.764: 97.2497% ( 16) 00:15:14.151 14298.764 - 14358.342: 97.3772% ( 16) 00:15:14.151 14358.342 - 14417.920: 97.4968% ( 15) 00:15:14.151 14417.920 - 14477.498: 97.6244% ( 16) 00:15:14.151 14477.498 - 14537.076: 97.7280% ( 13) 00:15:14.151 14537.076 - 14596.655: 97.8555% ( 16) 00:15:14.151 14596.655 - 14656.233: 97.9831% ( 16) 00:15:14.151 14656.233 - 14715.811: 98.0947% ( 14) 00:15:14.151 14715.811 - 14775.389: 98.2143% ( 15) 00:15:14.151 14775.389 - 14834.967: 98.3179% ( 13) 00:15:14.151 14834.967 - 14894.545: 98.4216% ( 13) 00:15:14.151 14894.545 - 14954.124: 98.5172% ( 12) 00:15:14.151 14954.124 - 15013.702: 98.5651% ( 6) 00:15:14.151 15013.702 - 15073.280: 98.6129% ( 6) 00:15:14.151 15073.280 - 15132.858: 98.6527% ( 5) 00:15:14.151 15132.858 - 15192.436: 98.7085% ( 7) 00:15:14.151 15192.436 - 15252.015: 98.7404% ( 4) 00:15:14.151 15252.015 - 15371.171: 98.8281% ( 11) 00:15:14.151 15371.171 - 15490.327: 98.8680% ( 5) 00:15:14.151 15490.327 - 15609.484: 98.9158% ( 6) 00:15:14.151 15609.484 - 15728.640: 98.9557% ( 5) 00:15:14.151 15728.640 - 15847.796: 98.9796% ( 3) 00:15:14.151 35746.909 - 35985.222: 98.9876% ( 1) 00:15:14.151 35985.222 - 36223.535: 99.0274% ( 5) 00:15:14.151 36223.535 - 36461.847: 99.0832% ( 7) 00:15:14.151 36461.847 - 36700.160: 99.1311% ( 6) 00:15:14.151 36700.160 - 36938.473: 99.1869% ( 7) 00:15:14.151 36938.473 - 37176.785: 99.2267% ( 5) 00:15:14.151 37176.785 - 37415.098: 99.2825% ( 7) 00:15:14.152 37415.098 - 37653.411: 99.3383% ( 7) 00:15:14.152 37653.411 - 37891.724: 99.3862% ( 6) 00:15:14.152 37891.724 - 38130.036: 99.4340% ( 6) 00:15:14.152 38130.036 - 38368.349: 99.4898% ( 7) 00:15:14.152 43134.604 - 43372.916: 99.5536% ( 8) 00:15:14.152 43372.916 - 43611.229: 99.6173% ( 8) 00:15:14.152 43611.229 - 43849.542: 99.6572% ( 5) 00:15:14.152 43849.542 - 44087.855: 99.7130% ( 7) 00:15:14.152 44087.855 - 44326.167: 99.7688% ( 7) 00:15:14.152 44326.167 - 44564.480: 99.8246% ( 7) 00:15:14.152 44564.480 - 44802.793: 99.8804% ( 7) 00:15:14.152 44802.793 - 45041.105: 99.9362% ( 7) 00:15:14.152 45041.105 - 45279.418: 99.9920% ( 7) 00:15:14.152 45279.418 - 45517.731: 100.0000% ( 1) 00:15:14.152 00:15:14.152 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:15:14.152 ============================================================================== 00:15:14.152 Range in us Cumulative IO count 00:15:14.152 7983.476 - 8043.055: 0.0080% ( 1) 00:15:14.152 8043.055 - 8102.633: 0.0478% ( 5) 00:15:14.152 8102.633 - 8162.211: 0.1435% ( 12) 00:15:14.152 8162.211 - 8221.789: 0.2631% ( 15) 00:15:14.152 8221.789 - 8281.367: 0.4225% ( 20) 00:15:14.152 8281.367 - 8340.945: 0.5899% ( 21) 00:15:14.152 8340.945 - 8400.524: 0.7892% ( 25) 00:15:14.152 8400.524 - 8460.102: 1.0364% ( 31) 00:15:14.152 8460.102 - 8519.680: 1.3871% ( 44) 00:15:14.152 8519.680 - 8579.258: 1.8256% ( 55) 00:15:14.152 8579.258 - 8638.836: 2.3996% ( 72) 00:15:14.152 8638.836 - 8698.415: 3.3084% ( 114) 00:15:14.152 8698.415 - 8757.993: 4.5041% ( 150) 00:15:14.152 8757.993 - 8817.571: 6.2899% ( 224) 00:15:14.152 8817.571 - 8877.149: 8.3546% ( 259) 00:15:14.152 8877.149 - 8936.727: 10.7781% ( 304) 00:15:14.152 8936.727 - 8996.305: 13.3689% ( 325) 00:15:14.152 8996.305 - 9055.884: 16.3026% ( 368) 00:15:14.152 9055.884 - 9115.462: 19.4037% ( 389) 00:15:14.152 9115.462 - 9175.040: 
22.6483% ( 407) 00:15:14.152 9175.040 - 9234.618: 25.9885% ( 419) 00:15:14.152 9234.618 - 9294.196: 29.3686% ( 424) 00:15:14.152 9294.196 - 9353.775: 32.9480% ( 449) 00:15:14.152 9353.775 - 9413.353: 36.6231% ( 461) 00:15:14.152 9413.353 - 9472.931: 40.3619% ( 469) 00:15:14.152 9472.931 - 9532.509: 44.0210% ( 459) 00:15:14.152 9532.509 - 9592.087: 47.8555% ( 481) 00:15:14.152 9592.087 - 9651.665: 51.5944% ( 469) 00:15:14.152 9651.665 - 9711.244: 55.3492% ( 471) 00:15:14.152 9711.244 - 9770.822: 58.7691% ( 429) 00:15:14.152 9770.822 - 9830.400: 62.0217% ( 408) 00:15:14.152 9830.400 - 9889.978: 65.0829% ( 384) 00:15:14.152 9889.978 - 9949.556: 67.7455% ( 334) 00:15:14.152 9949.556 - 10009.135: 70.1451% ( 301) 00:15:14.152 10009.135 - 10068.713: 72.1540% ( 252) 00:15:14.152 10068.713 - 10128.291: 73.8202% ( 209) 00:15:14.152 10128.291 - 10187.869: 75.3268% ( 189) 00:15:14.152 10187.869 - 10247.447: 76.5705% ( 156) 00:15:14.152 10247.447 - 10307.025: 77.7663% ( 150) 00:15:14.152 10307.025 - 10366.604: 78.7468% ( 123) 00:15:14.152 10366.604 - 10426.182: 79.7274% ( 123) 00:15:14.152 10426.182 - 10485.760: 80.6202% ( 112) 00:15:14.152 10485.760 - 10545.338: 81.4732% ( 107) 00:15:14.152 10545.338 - 10604.916: 82.2305% ( 95) 00:15:14.152 10604.916 - 10664.495: 82.9321% ( 88) 00:15:14.152 10664.495 - 10724.073: 83.5778% ( 81) 00:15:14.152 10724.073 - 10783.651: 84.2315% ( 82) 00:15:14.152 10783.651 - 10843.229: 84.8374% ( 76) 00:15:14.152 10843.229 - 10902.807: 85.4193% ( 73) 00:15:14.152 10902.807 - 10962.385: 86.0332% ( 77) 00:15:14.152 10962.385 - 11021.964: 86.5992% ( 71) 00:15:14.152 11021.964 - 11081.542: 87.1732% ( 72) 00:15:14.152 11081.542 - 11141.120: 87.6993% ( 66) 00:15:14.152 11141.120 - 11200.698: 88.1617% ( 58) 00:15:14.152 11200.698 - 11260.276: 88.6081% ( 56) 00:15:14.152 11260.276 - 11319.855: 89.0306% ( 53) 00:15:14.152 11319.855 - 11379.433: 89.4930% ( 58) 00:15:14.152 11379.433 - 11439.011: 89.9075% ( 52) 00:15:14.152 11439.011 - 11498.589: 90.3061% ( 50) 00:15:14.152 11498.589 - 11558.167: 90.6409% ( 42) 00:15:14.152 11558.167 - 11617.745: 90.9279% ( 36) 00:15:14.152 11617.745 - 11677.324: 91.1671% ( 30) 00:15:14.152 11677.324 - 11736.902: 91.3903% ( 28) 00:15:14.152 11736.902 - 11796.480: 91.6055% ( 27) 00:15:14.152 11796.480 - 11856.058: 91.7809% ( 22) 00:15:14.152 11856.058 - 11915.636: 91.9483% ( 21) 00:15:14.152 11915.636 - 11975.215: 92.0759% ( 16) 00:15:14.152 11975.215 - 12034.793: 92.1875% ( 14) 00:15:14.152 12034.793 - 12094.371: 92.2991% ( 14) 00:15:14.152 12094.371 - 12153.949: 92.4107% ( 14) 00:15:14.152 12153.949 - 12213.527: 92.5303% ( 15) 00:15:14.152 12213.527 - 12273.105: 92.6499% ( 15) 00:15:14.152 12273.105 - 12332.684: 92.7934% ( 18) 00:15:14.152 12332.684 - 12392.262: 92.9129% ( 15) 00:15:14.152 12392.262 - 12451.840: 93.0564% ( 18) 00:15:14.152 12451.840 - 12511.418: 93.1521% ( 12) 00:15:14.152 12511.418 - 12570.996: 93.2637% ( 14) 00:15:14.152 12570.996 - 12630.575: 93.3992% ( 17) 00:15:14.152 12630.575 - 12690.153: 93.5427% ( 18) 00:15:14.152 12690.153 - 12749.731: 93.6862% ( 18) 00:15:14.152 12749.731 - 12809.309: 93.8377% ( 19) 00:15:14.152 12809.309 - 12868.887: 93.9573% ( 15) 00:15:14.152 12868.887 - 12928.465: 94.0928% ( 17) 00:15:14.152 12928.465 - 12988.044: 94.2124% ( 15) 00:15:14.152 12988.044 - 13047.622: 94.3798% ( 21) 00:15:14.152 13047.622 - 13107.200: 94.5392% ( 20) 00:15:14.152 13107.200 - 13166.778: 94.7226% ( 23) 00:15:14.152 13166.778 - 13226.356: 94.8661% ( 18) 00:15:14.152 13226.356 - 13285.935: 95.0255% ( 20) 00:15:14.152 13285.935 
- 13345.513: 95.1451% ( 15) 00:15:14.152 13345.513 - 13405.091: 95.2886% ( 18) 00:15:14.152 13405.091 - 13464.669: 95.4241% ( 17) 00:15:14.152 13464.669 - 13524.247: 95.5756% ( 19) 00:15:14.152 13524.247 - 13583.825: 95.7191% ( 18) 00:15:14.152 13583.825 - 13643.404: 95.8466% ( 16) 00:15:14.152 13643.404 - 13702.982: 95.9901% ( 18) 00:15:14.152 13702.982 - 13762.560: 96.1496% ( 20) 00:15:14.152 13762.560 - 13822.138: 96.2691% ( 15) 00:15:14.152 13822.138 - 13881.716: 96.4126% ( 18) 00:15:14.152 13881.716 - 13941.295: 96.5482% ( 17) 00:15:14.152 13941.295 - 14000.873: 96.6916% ( 18) 00:15:14.152 14000.873 - 14060.451: 96.8431% ( 19) 00:15:14.152 14060.451 - 14120.029: 97.0026% ( 20) 00:15:14.152 14120.029 - 14179.607: 97.1859% ( 23) 00:15:14.152 14179.607 - 14239.185: 97.3533% ( 21) 00:15:14.152 14239.185 - 14298.764: 97.4888% ( 17) 00:15:14.152 14298.764 - 14358.342: 97.6403% ( 19) 00:15:14.152 14358.342 - 14417.920: 97.7838% ( 18) 00:15:14.152 14417.920 - 14477.498: 97.9193% ( 17) 00:15:14.152 14477.498 - 14537.076: 98.0628% ( 18) 00:15:14.153 14537.076 - 14596.655: 98.2063% ( 18) 00:15:14.153 14596.655 - 14656.233: 98.3020% ( 12) 00:15:14.153 14656.233 - 14715.811: 98.3976% ( 12) 00:15:14.153 14715.811 - 14775.389: 98.4933% ( 12) 00:15:14.153 14775.389 - 14834.967: 98.5969% ( 13) 00:15:14.153 14834.967 - 14894.545: 98.6767% ( 10) 00:15:14.153 14894.545 - 14954.124: 98.7564% ( 10) 00:15:14.153 14954.124 - 15013.702: 98.8202% ( 8) 00:15:14.153 15013.702 - 15073.280: 98.8999% ( 10) 00:15:14.153 15073.280 - 15132.858: 98.9397% ( 5) 00:15:14.153 15132.858 - 15192.436: 98.9636% ( 3) 00:15:14.153 15192.436 - 15252.015: 98.9796% ( 2) 00:15:14.153 34078.720 - 34317.033: 99.0274% ( 6) 00:15:14.153 34317.033 - 34555.345: 99.0832% ( 7) 00:15:14.153 34555.345 - 34793.658: 99.1470% ( 8) 00:15:14.153 34793.658 - 35031.971: 99.2028% ( 7) 00:15:14.153 35031.971 - 35270.284: 99.2666% ( 8) 00:15:14.153 35270.284 - 35508.596: 99.3224% ( 7) 00:15:14.153 35508.596 - 35746.909: 99.3782% ( 7) 00:15:14.153 35746.909 - 35985.222: 99.4499% ( 9) 00:15:14.153 35985.222 - 36223.535: 99.4898% ( 5) 00:15:14.153 40989.789 - 41228.102: 99.5297% ( 5) 00:15:14.153 41228.102 - 41466.415: 99.5934% ( 8) 00:15:14.153 41466.415 - 41704.727: 99.6413% ( 6) 00:15:14.153 41704.727 - 41943.040: 99.6971% ( 7) 00:15:14.153 41943.040 - 42181.353: 99.7608% ( 8) 00:15:14.153 42181.353 - 42419.665: 99.8166% ( 7) 00:15:14.153 42419.665 - 42657.978: 99.8804% ( 8) 00:15:14.153 42657.978 - 42896.291: 99.9442% ( 8) 00:15:14.153 42896.291 - 43134.604: 100.0000% ( 7) 00:15:14.153 00:15:14.153 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:15:14.153 ============================================================================== 00:15:14.153 Range in us Cumulative IO count 00:15:14.153 7983.476 - 8043.055: 0.0159% ( 2) 00:15:14.153 8043.055 - 8102.633: 0.0558% ( 5) 00:15:14.153 8102.633 - 8162.211: 0.1435% ( 11) 00:15:14.153 8162.211 - 8221.789: 0.2950% ( 19) 00:15:14.153 8221.789 - 8281.367: 0.4225% ( 16) 00:15:14.153 8281.367 - 8340.945: 0.5740% ( 19) 00:15:14.153 8340.945 - 8400.524: 0.8131% ( 30) 00:15:14.153 8400.524 - 8460.102: 1.0762% ( 33) 00:15:14.153 8460.102 - 8519.680: 1.3791% ( 38) 00:15:14.153 8519.680 - 8579.258: 1.7459% ( 46) 00:15:14.153 8579.258 - 8638.836: 2.3597% ( 77) 00:15:14.153 8638.836 - 8698.415: 3.2127% ( 107) 00:15:14.153 8698.415 - 8757.993: 4.3925% ( 148) 00:15:14.153 8757.993 - 8817.571: 5.9232% ( 192) 00:15:14.153 8817.571 - 8877.149: 7.9321% ( 252) 00:15:14.153 8877.149 - 8936.727: 10.2439% ( 290) 
00:15:14.153 8936.727 - 8996.305: 12.9066% ( 334) 00:15:14.153 8996.305 - 9055.884: 15.7924% ( 362) 00:15:14.153 9055.884 - 9115.462: 18.9493% ( 396) 00:15:14.153 9115.462 - 9175.040: 22.2577% ( 415) 00:15:14.153 9175.040 - 9234.618: 25.6776% ( 429) 00:15:14.153 9234.618 - 9294.196: 29.1374% ( 434) 00:15:14.153 9294.196 - 9353.775: 32.5893% ( 433) 00:15:14.153 9353.775 - 9413.353: 36.1846% ( 451) 00:15:14.153 9413.353 - 9472.931: 39.8198% ( 456) 00:15:14.153 9472.931 - 9532.509: 43.4790% ( 459) 00:15:14.153 9532.509 - 9592.087: 47.1859% ( 465) 00:15:14.153 9592.087 - 9651.665: 51.0124% ( 480) 00:15:14.153 9651.665 - 9711.244: 54.7991% ( 475) 00:15:14.153 9711.244 - 9770.822: 58.4184% ( 454) 00:15:14.153 9770.822 - 9830.400: 61.7905% ( 423) 00:15:14.153 9830.400 - 9889.978: 65.0271% ( 406) 00:15:14.153 9889.978 - 9949.556: 67.7854% ( 346) 00:15:14.153 9949.556 - 10009.135: 70.2408% ( 308) 00:15:14.153 10009.135 - 10068.713: 72.2417% ( 251) 00:15:14.153 10068.713 - 10128.291: 74.0753% ( 230) 00:15:14.153 10128.291 - 10187.869: 75.6218% ( 194) 00:15:14.153 10187.869 - 10247.447: 76.9930% ( 172) 00:15:14.153 10247.447 - 10307.025: 78.1330% ( 143) 00:15:14.153 10307.025 - 10366.604: 79.2251% ( 137) 00:15:14.153 10366.604 - 10426.182: 80.2057% ( 123) 00:15:14.153 10426.182 - 10485.760: 81.1464% ( 118) 00:15:14.153 10485.760 - 10545.338: 81.9994% ( 107) 00:15:14.153 10545.338 - 10604.916: 82.7647% ( 96) 00:15:14.153 10604.916 - 10664.495: 83.4104% ( 81) 00:15:14.153 10664.495 - 10724.073: 84.0482% ( 80) 00:15:14.153 10724.073 - 10783.651: 84.6142% ( 71) 00:15:14.153 10783.651 - 10843.229: 85.1483% ( 67) 00:15:14.153 10843.229 - 10902.807: 85.6744% ( 66) 00:15:14.153 10902.807 - 10962.385: 86.1846% ( 64) 00:15:14.153 10962.385 - 11021.964: 86.6948% ( 64) 00:15:14.153 11021.964 - 11081.542: 87.2130% ( 65) 00:15:14.153 11081.542 - 11141.120: 87.6754% ( 58) 00:15:14.153 11141.120 - 11200.698: 88.1138% ( 55) 00:15:14.153 11200.698 - 11260.276: 88.5523% ( 55) 00:15:14.153 11260.276 - 11319.855: 89.0067% ( 57) 00:15:14.153 11319.855 - 11379.433: 89.4372% ( 54) 00:15:14.153 11379.433 - 11439.011: 89.8677% ( 54) 00:15:14.153 11439.011 - 11498.589: 90.2583% ( 49) 00:15:14.153 11498.589 - 11558.167: 90.5772% ( 40) 00:15:14.153 11558.167 - 11617.745: 90.8402% ( 33) 00:15:14.153 11617.745 - 11677.324: 91.0635% ( 28) 00:15:14.153 11677.324 - 11736.902: 91.3345% ( 34) 00:15:14.153 11736.902 - 11796.480: 91.5816% ( 31) 00:15:14.153 11796.480 - 11856.058: 91.8048% ( 28) 00:15:14.153 11856.058 - 11915.636: 92.0839% ( 35) 00:15:14.153 11915.636 - 11975.215: 92.3071% ( 28) 00:15:14.153 11975.215 - 12034.793: 92.5542% ( 31) 00:15:14.153 12034.793 - 12094.371: 92.7615% ( 26) 00:15:14.153 12094.371 - 12153.949: 92.9847% ( 28) 00:15:14.153 12153.949 - 12213.527: 93.1601% ( 22) 00:15:14.153 12213.527 - 12273.105: 93.3673% ( 26) 00:15:14.153 12273.105 - 12332.684: 93.5427% ( 22) 00:15:14.153 12332.684 - 12392.262: 93.6783% ( 17) 00:15:14.153 12392.262 - 12451.840: 93.8058% ( 16) 00:15:14.153 12451.840 - 12511.418: 93.9254% ( 15) 00:15:14.153 12511.418 - 12570.996: 94.0529% ( 16) 00:15:14.153 12570.996 - 12630.575: 94.1725% ( 15) 00:15:14.153 12630.575 - 12690.153: 94.2682% ( 12) 00:15:14.153 12690.153 - 12749.731: 94.3718% ( 13) 00:15:14.153 12749.731 - 12809.309: 94.4436% ( 9) 00:15:14.153 12809.309 - 12868.887: 94.5233% ( 10) 00:15:14.153 12868.887 - 12928.465: 94.5631% ( 5) 00:15:14.153 12928.465 - 12988.044: 94.5950% ( 4) 00:15:14.153 12988.044 - 13047.622: 94.6189% ( 3) 00:15:14.153 13047.622 - 13107.200: 94.6747% ( 
7) 00:15:14.153 13107.200 - 13166.778: 94.7624% ( 11) 00:15:14.153 13166.778 - 13226.356: 94.8740% ( 14) 00:15:14.153 13226.356 - 13285.935: 94.9697% ( 12) 00:15:14.153 13285.935 - 13345.513: 95.0893% ( 15) 00:15:14.153 13345.513 - 13405.091: 95.2009% ( 14) 00:15:14.153 13405.091 - 13464.669: 95.3444% ( 18) 00:15:14.153 13464.669 - 13524.247: 95.4719% ( 16) 00:15:14.153 13524.247 - 13583.825: 95.6154% ( 18) 00:15:14.153 13583.825 - 13643.404: 95.7749% ( 20) 00:15:14.154 13643.404 - 13702.982: 95.9343% ( 20) 00:15:14.154 13702.982 - 13762.560: 96.1017% ( 21) 00:15:14.154 13762.560 - 13822.138: 96.2851% ( 23) 00:15:14.154 13822.138 - 13881.716: 96.4923% ( 26) 00:15:14.154 13881.716 - 13941.295: 96.6677% ( 22) 00:15:14.154 13941.295 - 14000.873: 96.8351% ( 21) 00:15:14.154 14000.873 - 14060.451: 97.0265% ( 24) 00:15:14.154 14060.451 - 14120.029: 97.1779% ( 19) 00:15:14.154 14120.029 - 14179.607: 97.3693% ( 24) 00:15:14.154 14179.607 - 14239.185: 97.5367% ( 21) 00:15:14.154 14239.185 - 14298.764: 97.6881% ( 19) 00:15:14.154 14298.764 - 14358.342: 97.8157% ( 16) 00:15:14.154 14358.342 - 14417.920: 97.9592% ( 18) 00:15:14.154 14417.920 - 14477.498: 98.0628% ( 13) 00:15:14.154 14477.498 - 14537.076: 98.1904% ( 16) 00:15:14.154 14537.076 - 14596.655: 98.2940% ( 13) 00:15:14.154 14596.655 - 14656.233: 98.3976% ( 13) 00:15:14.154 14656.233 - 14715.811: 98.4933% ( 12) 00:15:14.154 14715.811 - 14775.389: 98.5969% ( 13) 00:15:14.154 14775.389 - 14834.967: 98.6846% ( 11) 00:15:14.154 14834.967 - 14894.545: 98.7564% ( 9) 00:15:14.154 14894.545 - 14954.124: 98.8042% ( 6) 00:15:14.154 14954.124 - 15013.702: 98.8361% ( 4) 00:15:14.154 15013.702 - 15073.280: 98.8600% ( 3) 00:15:14.154 15073.280 - 15132.858: 98.8919% ( 4) 00:15:14.154 15132.858 - 15192.436: 98.9158% ( 3) 00:15:14.154 15192.436 - 15252.015: 98.9397% ( 3) 00:15:14.154 15252.015 - 15371.171: 98.9796% ( 5) 00:15:14.154 32410.531 - 32648.844: 99.0274% ( 6) 00:15:14.154 32648.844 - 32887.156: 99.0753% ( 6) 00:15:14.154 32887.156 - 33125.469: 99.1231% ( 6) 00:15:14.154 33125.469 - 33363.782: 99.1789% ( 7) 00:15:14.154 33363.782 - 33602.095: 99.2347% ( 7) 00:15:14.154 33602.095 - 33840.407: 99.2825% ( 6) 00:15:14.154 33840.407 - 34078.720: 99.3383% ( 7) 00:15:14.154 34078.720 - 34317.033: 99.3941% ( 7) 00:15:14.154 34317.033 - 34555.345: 99.4499% ( 7) 00:15:14.154 34555.345 - 34793.658: 99.4898% ( 5) 00:15:14.154 40036.538 - 40274.851: 99.5376% ( 6) 00:15:14.154 40274.851 - 40513.164: 99.5934% ( 7) 00:15:14.154 40513.164 - 40751.476: 99.6413% ( 6) 00:15:14.154 40751.476 - 40989.789: 99.7050% ( 8) 00:15:14.154 40989.789 - 41228.102: 99.7529% ( 6) 00:15:14.154 41228.102 - 41466.415: 99.8166% ( 8) 00:15:14.154 41466.415 - 41704.727: 99.8645% ( 6) 00:15:14.154 41704.727 - 41943.040: 99.9203% ( 7) 00:15:14.154 41943.040 - 42181.353: 99.9761% ( 7) 00:15:14.154 42181.353 - 42419.665: 100.0000% ( 3) 00:15:14.154 00:15:14.154 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:15:14.154 ============================================================================== 00:15:14.154 Range in us Cumulative IO count 00:15:14.154 8043.055 - 8102.633: 0.0319% ( 4) 00:15:14.154 8102.633 - 8162.211: 0.0797% ( 6) 00:15:14.154 8162.211 - 8221.789: 0.2152% ( 17) 00:15:14.154 8221.789 - 8281.367: 0.3667% ( 19) 00:15:14.154 8281.367 - 8340.945: 0.5660% ( 25) 00:15:14.154 8340.945 - 8400.524: 0.7892% ( 28) 00:15:14.154 8400.524 - 8460.102: 1.0762% ( 36) 00:15:14.154 8460.102 - 8519.680: 1.3951% ( 40) 00:15:14.154 8519.680 - 8579.258: 1.8096% ( 52) 00:15:14.154 
8579.258 - 8638.836: 2.3836% ( 72) 00:15:14.154 8638.836 - 8698.415: 3.3084% ( 116) 00:15:14.154 8698.415 - 8757.993: 4.5599% ( 157) 00:15:14.154 8757.993 - 8817.571: 6.1942% ( 205) 00:15:14.154 8817.571 - 8877.149: 8.1633% ( 247) 00:15:14.154 8877.149 - 8936.727: 10.4831% ( 291) 00:15:14.154 8936.727 - 8996.305: 13.2254% ( 344) 00:15:14.154 8996.305 - 9055.884: 16.2309% ( 377) 00:15:14.154 9055.884 - 9115.462: 19.3957% ( 397) 00:15:14.154 9115.462 - 9175.040: 22.8635% ( 435) 00:15:14.154 9175.040 - 9234.618: 26.2755% ( 428) 00:15:14.154 9234.618 - 9294.196: 29.7433% ( 435) 00:15:14.154 9294.196 - 9353.775: 33.2988% ( 446) 00:15:14.154 9353.775 - 9413.353: 36.9340% ( 456) 00:15:14.154 9413.353 - 9472.931: 40.5612% ( 455) 00:15:14.154 9472.931 - 9532.509: 44.2761% ( 466) 00:15:14.154 9532.509 - 9592.087: 47.9512% ( 461) 00:15:14.154 9592.087 - 9651.665: 51.6821% ( 468) 00:15:14.154 9651.665 - 9711.244: 55.4050% ( 467) 00:15:14.154 9711.244 - 9770.822: 58.8010% ( 426) 00:15:14.154 9770.822 - 9830.400: 62.0297% ( 405) 00:15:14.154 9830.400 - 9889.978: 65.0510% ( 379) 00:15:14.154 9889.978 - 9949.556: 67.6818% ( 330) 00:15:14.154 9949.556 - 10009.135: 70.0574% ( 298) 00:15:14.154 10009.135 - 10068.713: 72.1062% ( 257) 00:15:14.154 10068.713 - 10128.291: 73.7643% ( 208) 00:15:14.154 10128.291 - 10187.869: 75.1754% ( 177) 00:15:14.154 10187.869 - 10247.447: 76.4908% ( 165) 00:15:14.154 10247.447 - 10307.025: 77.6945% ( 151) 00:15:14.154 10307.025 - 10366.604: 78.7707% ( 135) 00:15:14.154 10366.604 - 10426.182: 79.7752% ( 126) 00:15:14.154 10426.182 - 10485.760: 80.7079% ( 117) 00:15:14.154 10485.760 - 10545.338: 81.5928% ( 111) 00:15:14.154 10545.338 - 10604.916: 82.3740% ( 98) 00:15:14.154 10604.916 - 10664.495: 83.1314% ( 95) 00:15:14.154 10664.495 - 10724.073: 83.7612% ( 79) 00:15:14.154 10724.073 - 10783.651: 84.3511% ( 74) 00:15:14.154 10783.651 - 10843.229: 84.9091% ( 70) 00:15:14.154 10843.229 - 10902.807: 85.4592% ( 69) 00:15:14.154 10902.807 - 10962.385: 86.0013% ( 68) 00:15:14.154 10962.385 - 11021.964: 86.4876% ( 61) 00:15:14.154 11021.964 - 11081.542: 86.9659% ( 60) 00:15:14.154 11081.542 - 11141.120: 87.4362% ( 59) 00:15:14.154 11141.120 - 11200.698: 87.9145% ( 60) 00:15:14.154 11200.698 - 11260.276: 88.2892% ( 47) 00:15:14.154 11260.276 - 11319.855: 88.6958% ( 51) 00:15:14.154 11319.855 - 11379.433: 89.0784% ( 48) 00:15:14.154 11379.433 - 11439.011: 89.4292% ( 44) 00:15:14.154 11439.011 - 11498.589: 89.8039% ( 47) 00:15:14.154 11498.589 - 11558.167: 90.1068% ( 38) 00:15:14.154 11558.167 - 11617.745: 90.3938% ( 36) 00:15:14.154 11617.745 - 11677.324: 90.7207% ( 41) 00:15:14.154 11677.324 - 11736.902: 91.0555% ( 42) 00:15:14.154 11736.902 - 11796.480: 91.3584% ( 38) 00:15:14.154 11796.480 - 11856.058: 91.6614% ( 38) 00:15:14.154 11856.058 - 11915.636: 91.9005% ( 30) 00:15:14.154 11915.636 - 11975.215: 92.1397% ( 30) 00:15:14.154 11975.215 - 12034.793: 92.3709% ( 29) 00:15:14.154 12034.793 - 12094.371: 92.6020% ( 29) 00:15:14.154 12094.371 - 12153.949: 92.8093% ( 26) 00:15:14.154 12153.949 - 12213.527: 93.0246% ( 27) 00:15:14.154 12213.527 - 12273.105: 93.2239% ( 25) 00:15:14.154 12273.105 - 12332.684: 93.4152% ( 24) 00:15:14.154 12332.684 - 12392.262: 93.5826% ( 21) 00:15:14.154 12392.262 - 12451.840: 93.7819% ( 25) 00:15:14.154 12451.840 - 12511.418: 93.9732% ( 24) 00:15:14.154 12511.418 - 12570.996: 94.1566% ( 23) 00:15:14.155 12570.996 - 12630.575: 94.2921% ( 17) 00:15:14.155 12630.575 - 12690.153: 94.4595% ( 21) 00:15:14.155 12690.153 - 12749.731: 94.5711% ( 14) 00:15:14.155 
12749.731 - 12809.309: 94.6429% ( 9) 00:15:14.155 12809.309 - 12868.887: 94.7226% ( 10) 00:15:14.155 12868.887 - 12928.465: 94.7784% ( 7) 00:15:14.155 12928.465 - 12988.044: 94.8342% ( 7) 00:15:14.155 12988.044 - 13047.622: 94.9059% ( 9) 00:15:14.155 13047.622 - 13107.200: 94.9777% ( 9) 00:15:14.155 13107.200 - 13166.778: 95.0494% ( 9) 00:15:14.155 13166.778 - 13226.356: 95.1291% ( 10) 00:15:14.155 13226.356 - 13285.935: 95.1849% ( 7) 00:15:14.155 13285.935 - 13345.513: 95.2408% ( 7) 00:15:14.155 13345.513 - 13405.091: 95.3364% ( 12) 00:15:14.155 13405.091 - 13464.669: 95.4560% ( 15) 00:15:14.155 13464.669 - 13524.247: 95.5756% ( 15) 00:15:14.155 13524.247 - 13583.825: 95.7270% ( 19) 00:15:14.155 13583.825 - 13643.404: 95.8865% ( 20) 00:15:14.155 13643.404 - 13702.982: 96.0698% ( 23) 00:15:14.155 13702.982 - 13762.560: 96.2612% ( 24) 00:15:14.155 13762.560 - 13822.138: 96.4365% ( 22) 00:15:14.155 13822.138 - 13881.716: 96.5960% ( 20) 00:15:14.155 13881.716 - 13941.295: 96.7554% ( 20) 00:15:14.155 13941.295 - 14000.873: 96.9308% ( 22) 00:15:14.155 14000.873 - 14060.451: 97.0663% ( 17) 00:15:14.155 14060.451 - 14120.029: 97.2098% ( 18) 00:15:14.155 14120.029 - 14179.607: 97.3214% ( 14) 00:15:14.155 14179.607 - 14239.185: 97.4490% ( 16) 00:15:14.155 14239.185 - 14298.764: 97.5845% ( 17) 00:15:14.155 14298.764 - 14358.342: 97.7439% ( 20) 00:15:14.155 14358.342 - 14417.920: 97.8954% ( 19) 00:15:14.155 14417.920 - 14477.498: 98.0628% ( 21) 00:15:14.155 14477.498 - 14537.076: 98.1744% ( 14) 00:15:14.155 14537.076 - 14596.655: 98.2860% ( 14) 00:15:14.155 14596.655 - 14656.233: 98.3817% ( 12) 00:15:14.155 14656.233 - 14715.811: 98.4694% ( 11) 00:15:14.155 14715.811 - 14775.389: 98.5411% ( 9) 00:15:14.155 14775.389 - 14834.967: 98.5890% ( 6) 00:15:14.155 14834.967 - 14894.545: 98.6448% ( 7) 00:15:14.155 14894.545 - 14954.124: 98.6926% ( 6) 00:15:14.155 14954.124 - 15013.702: 98.7404% ( 6) 00:15:14.155 15013.702 - 15073.280: 98.7803% ( 5) 00:15:14.155 15073.280 - 15132.858: 98.8361% ( 7) 00:15:14.155 15132.858 - 15192.436: 98.8760% ( 5) 00:15:14.155 15192.436 - 15252.015: 98.9158% ( 5) 00:15:14.155 15252.015 - 15371.171: 98.9716% ( 7) 00:15:14.155 15371.171 - 15490.327: 98.9796% ( 1) 00:15:14.155 30384.873 - 30504.029: 99.0035% ( 3) 00:15:14.155 30504.029 - 30742.342: 99.0593% ( 7) 00:15:14.155 30742.342 - 30980.655: 99.1151% ( 7) 00:15:14.155 30980.655 - 31218.967: 99.1709% ( 7) 00:15:14.155 31218.967 - 31457.280: 99.2267% ( 7) 00:15:14.155 31457.280 - 31695.593: 99.2746% ( 6) 00:15:14.155 31695.593 - 31933.905: 99.3304% ( 7) 00:15:14.155 31933.905 - 32172.218: 99.3862% ( 7) 00:15:14.155 32172.218 - 32410.531: 99.4420% ( 7) 00:15:14.155 32410.531 - 32648.844: 99.4818% ( 5) 00:15:14.155 32648.844 - 32887.156: 99.4898% ( 1) 00:15:14.155 38368.349 - 38606.662: 99.5057% ( 2) 00:15:14.155 38606.662 - 38844.975: 99.5615% ( 7) 00:15:14.155 38844.975 - 39083.287: 99.6173% ( 7) 00:15:14.155 39083.287 - 39321.600: 99.6732% ( 7) 00:15:14.155 39321.600 - 39559.913: 99.7290% ( 7) 00:15:14.155 39559.913 - 39798.225: 99.7927% ( 8) 00:15:14.155 39798.225 - 40036.538: 99.8485% ( 7) 00:15:14.155 40036.538 - 40274.851: 99.9043% ( 7) 00:15:14.155 40274.851 - 40513.164: 99.9601% ( 7) 00:15:14.155 40513.164 - 40751.476: 100.0000% ( 5) 00:15:14.155 00:15:14.155 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:15:14.155 ============================================================================== 00:15:14.155 Range in us Cumulative IO count 00:15:14.155 7983.476 - 8043.055: 0.0238% ( 3) 00:15:14.155 
00:15:14.155 [remaining bucket rows omitted: 8043.055us .. 32172.218us, cumulative 0.0635% .. 100.0000%]
00:15:14.156 
00:15:14.156 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:15:14.156 ==============================================================================
00:15:14.156        Range in us     Cumulative    IO count
00:15:14.157 [bucket rows omitted: 7983.476us .. 30027.404us, cumulative 0.0079% .. 100.0000%]
00:15:14.158 
00:15:14.158 13:09:20 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:15:15.536 Initializing NVMe Controllers
00:15:15.536 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:15:15.536 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:15:15.536 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:15:15.536 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:15:15.536 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:15:15.536 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:15:15.536 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:15:15.536 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:15:15.536 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:15:15.536 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:15:15.536 Initialization complete. Launching workers.
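
The spdk_nvme_perf invocation echoed above drives everything that follows. Below is a minimal annotated re-statement of the same command; the flag glosses are the editor's reading of the tool's usage text, not output from this build, so verify them against the binary in the repo:

    # Annotated sketch of the command echoed above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w write -o 12288 -t 1 -LL -i 0
    # -q 128    queue depth: up to 128 I/Os kept outstanding per namespace
    # -w write  I/O pattern: 100% writes
    # -o 12288  I/O size in bytes (12 KiB per write)
    # -t 1      run time in seconds
    # -LL       latency tracking; given twice so the tool prints per-bucket
    #           histograms in addition to the percentile summaries
    # -i 0      shared memory group ID, letting the process coexist with
    #           other SPDK processes on the host
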
00:15:15.536 ========================================================
00:15:15.536                                                                              Latency(us)
00:15:15.536 Device Information                     :       IOPS      MiB/s    Average        min        max
00:15:15.536 PCIE (0000:00:10.0) NSID 1 from core 0 :    9658.42     113.18   13279.64    9339.96   45079.53
00:15:15.536 PCIE (0000:00:11.0) NSID 1 from core 0 :    9658.42     113.18   13251.29    9436.12   43199.96
00:15:15.536 PCIE (0000:00:13.0) NSID 1 from core 0 :    9658.42     113.18   13220.73    9578.53   41601.49
00:15:15.536 PCIE (0000:00:12.0) NSID 1 from core 0 :    9658.42     113.18   13190.10    9618.63   39743.38
00:15:15.536 PCIE (0000:00:12.0) NSID 2 from core 0 :    9658.42     113.18   13159.40    9596.15   38007.89
00:15:15.536 PCIE (0000:00:12.0) NSID 3 from core 0 :    9658.42     113.18   13129.05    9377.99   36208.54
00:15:15.536 ========================================================
00:15:15.536 Total                                  :   57950.50     679.11   13205.03    9339.96   45079.53
00:15:15.536 
00:15:15.536 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:15:15.536 =================================================================================
00:15:15.536   1.00000% :  9711.244us  10.00000% : 10366.604us  25.00000% : 10843.229us
00:15:15.536  50.00000% : 11677.324us  75.00000% : 14060.451us  90.00000% : 19184.175us
00:15:15.536  95.00000% : 20375.738us  98.00000% : 21448.145us  99.00000% : 31695.593us
00:15:15.536  99.50000% : 42419.665us  99.90000% : 44802.793us  99.99000% : 45279.418us
00:15:15.536  99.99900% : 45279.418us  99.99990% : 45279.418us  99.99999% : 45279.418us
00:15:15.536 
00:15:15.536 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:15:15.536 =================================================================================
00:15:15.536   1.00000% :  9830.400us  10.00000% : 10366.604us  25.00000% : 10843.229us
00:15:15.536  50.00000% : 11677.324us  75.00000% : 14060.451us  90.00000% : 19184.175us
00:15:15.536  95.00000% : 20256.582us  98.00000% : 21209.833us  99.00000% : 29789.091us
00:15:15.536  99.50000% : 40989.789us  99.90000% : 42896.291us  99.99000% : 43372.916us
00:15:15.536  99.99900% : 43372.916us  99.99990% : 43372.916us  99.99999% : 43372.916us
00:15:15.536 
00:15:15.536 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:15:15.536 =================================================================================
00:15:15.536   1.00000% :  9830.400us  10.00000% : 10366.604us  25.00000% : 10843.229us
00:15:15.536  50.00000% : 11677.324us  75.00000% : 14120.029us  90.00000% : 19184.175us
00:15:15.536  95.00000% : 20137.425us  98.00000% : 21209.833us  99.00000% : 28359.215us
00:15:15.536  99.50000% : 39321.600us  99.90000% : 41228.102us  99.99000% : 41704.727us
00:15:15.536  99.99900% : 41704.727us  99.99990% : 41704.727us  99.99999% : 41704.727us
00:15:15.536 
00:15:15.536 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:15:15.536 =================================================================================
00:15:15.536   1.00000% :  9830.400us  10.00000% : 10366.604us  25.00000% : 10902.807us
00:15:15.536  50.00000% : 11617.745us  75.00000% : 14060.451us  90.00000% : 19065.018us
00:15:15.536  95.00000% : 20137.425us  98.00000% : 21209.833us  99.00000% : 26452.713us
00:15:15.536  99.50000% : 37415.098us  99.90000% : 39321.600us  99.99000% : 39798.225us
00:15:15.536  99.99900% : 39798.225us  99.99990% : 39798.225us  99.99999% : 39798.225us
00:15:15.536 
00:15:15.536 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:15:15.536 =================================================================================
00:15:15.536   1.00000% :  9889.978us  10.00000% : 10426.182us  25.00000% : 10902.807us
00:15:15.536  50.00000% : 11617.745us  75.00000% : 14000.873us  90.00000% : 18945.862us
00:15:15.536  95.00000% : 20137.425us  98.00000% : 21209.833us  99.00000% : 24546.211us
00:15:15.536  99.50000% : 35746.909us  99.90000% : 37653.411us  99.99000% : 38130.036us
00:15:15.536  99.99900% : 38130.036us  99.99990% : 38130.036us  99.99999% : 38130.036us
00:15:15.536 
00:15:15.536 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:15:15.536 =================================================================================
00:15:15.536   1.00000% :  9889.978us  10.00000% : 10426.182us  25.00000% : 10902.807us
00:15:15.537  50.00000% : 11677.324us  75.00000% : 13941.295us  90.00000% : 19065.018us
00:15:15.537  95.00000% : 20137.425us  98.00000% : 21090.676us  99.00000% : 22878.022us
00:15:15.537  99.50000% : 33840.407us  99.90000% : 35746.909us  99.99000% : 36223.535us
00:15:15.537  99.99900% : 36223.535us  99.99990% : 36223.535us  99.99999% : 36223.535us
00:15:15.537 
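
The summary numbers above are internally consistent, which is a quick way to sanity-check a run: MiB/s is IOPS times the 12288-byte I/O size, and per-namespace IOPS is roughly queue depth divided by average latency (Little's law). A small cross-check sketch using bc, with the 0000:00:10.0 NSID 1 row as input:

    # MiB/s = IOPS * io_size_bytes / 2^20
    echo 'scale=6; 9658.42 * 12288 / 1048576' | bc    # ~113.18, the MiB/s column
    # Little's law: IOPS ~= queue_depth / avg_latency_in_seconds
    echo 'scale=6; 128 / (13279.64 / 1000000)' | bc   # ~9639, near the measured 9658.42
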
00:15:15.537 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:15:15.537 ==============================================================================
00:15:15.537        Range in us     Cumulative    IO count
00:15:15.537 [bucket rows omitted: 9294.196us .. 45279.418us, cumulative 0.0207% .. 100.0000%]
00:15:15.538 
00:15:15.538 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:15:15.538 ==============================================================================
00:15:15.538        Range in us     Cumulative    IO count
00:15:15.538 [bucket rows omitted: 9413.353us .. 43372.916us, cumulative 0.1242% .. 100.0000%]
00:15:15.539 
00:15:15.539 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:15:15.539 ==============================================================================
00:15:15.539        Range in us     Cumulative    IO count
00:15:15.539 [bucket rows omitted: 9532.509us .. 41704.727us, cumulative 0.0103% .. 100.0000%]
00:15:15.540 
00:15:15.540 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:15:15.540 ==============================================================================
00:15:15.540        Range in us     Cumulative    IO count
00:15:15.540 [bucket rows omitted: 9592.087us .. 39798.225us, cumulative 0.0931% .. 100.0000%]
00:15:15.541 
00:15:15.541 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:15:15.541 ==============================================================================
00:15:15.541        Range in us     Cumulative    IO count
00:15:15.541 [bucket rows omitted: 9592.087us .. 22043.927us, cumulative 0.0517% .. 98.6755%]
00:15:15.542 22878.022 - 22997.178: 98.6858% ( 1) 00:15:15.542 22997.178 - 23116.335: 98.7065% ( 2) 00:15:15.542 23116.335 - 23235.491: 98.7272% ( 2) 00:15:15.542 23235.491 - 23354.647: 98.7583% ( 3) 00:15:15.542 23354.647 - 23473.804: 98.7790% ( 2) 00:15:15.542 23473.804 - 23592.960: 98.8100% ( 3) 00:15:15.542 23592.960 - 23712.116: 98.8307% ( 2) 00:15:15.542 23712.116 - 23831.273: 98.8514% ( 2) 00:15:15.542 23831.273 - 23950.429: 98.8825% ( 3) 00:15:15.542 23950.429 - 24069.585: 98.9031% ( 2) 00:15:15.542 24069.585 - 24188.742: 98.9342% ( 3) 00:15:15.542 24188.742 - 24307.898: 98.9549% ( 2) 00:15:15.542 24307.898 - 24427.055: 98.9756% ( 2) 00:15:15.542 24427.055 - 24546.211: 99.0066% ( 3) 00:15:15.542 24546.211 - 24665.367: 99.0273% ( 2) 00:15:15.542 24665.367 - 24784.524: 99.0480% ( 2) 00:15:15.542 24784.524 - 24903.680: 99.0791% ( 3) 00:15:15.542 24903.680 - 25022.836: 99.0998% ( 2) 00:15:15.542 25022.836 - 25141.993: 99.1204% ( 2) 00:15:15.542 25141.993 - 25261.149: 99.1411% ( 2) 00:15:15.542 25261.149 - 25380.305: 99.1722% ( 3) 00:15:15.542 25380.305 - 25499.462: 99.1929% ( 2) 00:15:15.542 25499.462 - 25618.618: 99.2136% ( 2) 00:15:15.542 25618.618 - 25737.775: 99.2343% ( 2) 00:15:15.542 25737.775 - 25856.931: 99.2653% ( 3) 00:15:15.542 25856.931 - 25976.087: 99.2860% ( 2) 00:15:15.542 25976.087 - 26095.244: 99.3067% ( 2) 00:15:15.542 26095.244 - 26214.400: 99.3377% ( 3) 00:15:15.542 34555.345 - 34793.658: 99.3481% ( 1) 00:15:15.542 34793.658 - 35031.971: 99.3998% ( 5) 00:15:15.542 35031.971 - 35270.284: 99.4516% ( 5) 00:15:15.542 35270.284 - 35508.596: 99.4930% ( 4) 00:15:15.542 35508.596 - 35746.909: 99.5344% ( 4) 00:15:15.542 35746.909 - 35985.222: 99.5861% ( 5) 00:15:15.542 35985.222 - 36223.535: 99.6275% ( 4) 00:15:15.542 36223.535 - 36461.847: 99.6792% ( 5) 00:15:15.542 36461.847 - 36700.160: 99.7310% ( 5) 00:15:15.542 36700.160 - 36938.473: 99.7827% ( 5) 00:15:15.542 36938.473 - 37176.785: 99.8344% ( 5) 00:15:15.542 37176.785 - 37415.098: 99.8758% ( 4) 00:15:15.542 37415.098 - 37653.411: 99.9276% ( 5) 00:15:15.542 37653.411 - 37891.724: 99.9793% ( 5) 00:15:15.542 37891.724 - 38130.036: 100.0000% ( 2) 00:15:15.542 00:15:15.542 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:15:15.542 ============================================================================== 00:15:15.542 Range in us Cumulative IO count 00:15:15.542 9353.775 - 9413.353: 0.0103% ( 1) 00:15:15.542 9472.931 - 9532.509: 0.0207% ( 1) 00:15:15.542 9532.509 - 9592.087: 0.0724% ( 5) 00:15:15.542 9592.087 - 9651.665: 0.1656% ( 9) 00:15:15.542 9651.665 - 9711.244: 0.3518% ( 18) 00:15:15.542 9711.244 - 9770.822: 0.6002% ( 24) 00:15:15.542 9770.822 - 9830.400: 0.9520% ( 34) 00:15:15.542 9830.400 - 9889.978: 1.6246% ( 65) 00:15:15.542 9889.978 - 9949.556: 2.5041% ( 85) 00:15:15.542 9949.556 - 10009.135: 3.4458% ( 91) 00:15:15.542 10009.135 - 10068.713: 4.2322% ( 76) 00:15:15.542 10068.713 - 10128.291: 5.1014% ( 84) 00:15:15.542 10128.291 - 10187.869: 6.0741% ( 94) 00:15:15.542 10187.869 - 10247.447: 7.0882% ( 98) 00:15:15.542 10247.447 - 10307.025: 8.2678% ( 114) 00:15:15.542 10307.025 - 10366.604: 9.4164% ( 111) 00:15:15.542 10366.604 - 10426.182: 10.9478% ( 148) 00:15:15.542 10426.182 - 10485.760: 12.5724% ( 157) 00:15:15.542 10485.760 - 10545.338: 14.1556% ( 153) 00:15:15.542 10545.338 - 10604.916: 16.0079% ( 179) 00:15:15.542 10604.916 - 10664.495: 17.8808% ( 181) 00:15:15.542 10664.495 - 10724.073: 20.0331% ( 208) 00:15:15.542 10724.073 - 10783.651: 22.2165% ( 211) 00:15:15.542 10783.651 - 10843.229: 
24.3377% ( 205) 00:15:15.542 10843.229 - 10902.807: 26.6349% ( 222) 00:15:15.542 10902.807 - 10962.385: 28.7252% ( 202) 00:15:15.542 10962.385 - 11021.964: 30.8361% ( 204) 00:15:15.542 11021.964 - 11081.542: 32.9470% ( 204) 00:15:15.542 11081.542 - 11141.120: 34.9855% ( 197) 00:15:15.542 11141.120 - 11200.698: 37.0654% ( 201) 00:15:15.542 11200.698 - 11260.276: 39.0728% ( 194) 00:15:15.542 11260.276 - 11319.855: 41.0493% ( 191) 00:15:15.542 11319.855 - 11379.433: 42.8808% ( 177) 00:15:15.542 11379.433 - 11439.011: 44.7951% ( 185) 00:15:15.542 11439.011 - 11498.589: 46.4404% ( 159) 00:15:15.542 11498.589 - 11558.167: 48.1271% ( 163) 00:15:15.542 11558.167 - 11617.745: 49.7620% ( 158) 00:15:15.542 11617.745 - 11677.324: 51.4487% ( 163) 00:15:15.542 11677.324 - 11736.902: 52.9801% ( 148) 00:15:15.542 11736.902 - 11796.480: 54.4805% ( 145) 00:15:15.542 11796.480 - 11856.058: 55.9396% ( 141) 00:15:15.542 11856.058 - 11915.636: 57.2537% ( 127) 00:15:15.542 11915.636 - 11975.215: 58.7024% ( 140) 00:15:15.542 11975.215 - 12034.793: 59.9131% ( 117) 00:15:15.542 12034.793 - 12094.371: 60.9272% ( 98) 00:15:15.542 12094.371 - 12153.949: 61.7757% ( 82) 00:15:15.542 12153.949 - 12213.527: 62.5207% ( 72) 00:15:15.542 12213.527 - 12273.105: 63.1933% ( 65) 00:15:15.542 12273.105 - 12332.684: 63.8452% ( 63) 00:15:15.542 12332.684 - 12392.262: 64.5281% ( 66) 00:15:15.542 12392.262 - 12451.840: 65.2835% ( 73) 00:15:15.542 12451.840 - 12511.418: 65.9665% ( 66) 00:15:15.542 12511.418 - 12570.996: 66.5770% ( 59) 00:15:15.542 12570.996 - 12630.575: 67.1358% ( 54) 00:15:15.542 12630.575 - 12690.153: 67.7463% ( 59) 00:15:15.542 12690.153 - 12749.731: 68.1705% ( 41) 00:15:15.542 12749.731 - 12809.309: 68.5844% ( 40) 00:15:15.542 12809.309 - 12868.887: 68.9776% ( 38) 00:15:15.542 12868.887 - 12928.465: 69.3398% ( 35) 00:15:15.542 12928.465 - 12988.044: 69.6502% ( 30) 00:15:15.542 12988.044 - 13047.622: 69.8882% ( 23) 00:15:15.542 13047.622 - 13107.200: 70.1573% ( 26) 00:15:15.542 13107.200 - 13166.778: 70.3849% ( 22) 00:15:15.542 13166.778 - 13226.356: 70.6229% ( 23) 00:15:15.542 13226.356 - 13285.935: 70.9541% ( 32) 00:15:15.542 13285.935 - 13345.513: 71.2645% ( 30) 00:15:15.542 13345.513 - 13405.091: 71.6991% ( 42) 00:15:15.542 13405.091 - 13464.669: 72.1544% ( 44) 00:15:15.542 13464.669 - 13524.247: 72.5373% ( 37) 00:15:15.542 13524.247 - 13583.825: 72.9615% ( 41) 00:15:15.542 13583.825 - 13643.404: 73.2616% ( 29) 00:15:15.542 13643.404 - 13702.982: 73.6031% ( 33) 00:15:15.542 13702.982 - 13762.560: 74.0170% ( 40) 00:15:15.542 13762.560 - 13822.138: 74.4205% ( 39) 00:15:15.542 13822.138 - 13881.716: 74.7827% ( 35) 00:15:15.542 13881.716 - 13941.295: 75.1966% ( 40) 00:15:15.542 13941.295 - 14000.873: 75.6623% ( 45) 00:15:15.542 14000.873 - 14060.451: 76.0348% ( 36) 00:15:15.542 14060.451 - 14120.029: 76.3142% ( 27) 00:15:15.542 14120.029 - 14179.607: 76.6453% ( 32) 00:15:15.542 14179.607 - 14239.185: 76.9350% ( 28) 00:15:15.542 14239.185 - 14298.764: 77.2144% ( 27) 00:15:15.542 14298.764 - 14358.342: 77.4317% ( 21) 00:15:15.542 14358.342 - 14417.920: 77.6697% ( 23) 00:15:15.542 14417.920 - 14477.498: 77.8663% ( 19) 00:15:15.542 14477.498 - 14537.076: 78.0112% ( 14) 00:15:15.542 14537.076 - 14596.655: 78.2078% ( 19) 00:15:15.542 14596.655 - 14656.233: 78.3526% ( 14) 00:15:15.542 14656.233 - 14715.811: 78.5182% ( 16) 00:15:15.542 14715.811 - 14775.389: 78.6734% ( 15) 00:15:15.542 14775.389 - 14834.967: 78.8183% ( 14) 00:15:15.543 14834.967 - 14894.545: 78.9735% ( 15) 00:15:15.543 14894.545 - 14954.124: 79.0873% ( 11) 
00:15:15.543 14954.124 - 15013.702: 79.2219% ( 13) 00:15:15.543 15013.702 - 15073.280: 79.3046% ( 8) 00:15:15.543 15073.280 - 15132.858: 79.3771% ( 7) 00:15:15.543 15132.858 - 15192.436: 79.5012% ( 12) 00:15:15.543 15192.436 - 15252.015: 79.6151% ( 11) 00:15:15.543 15252.015 - 15371.171: 79.7806% ( 16) 00:15:15.543 15371.171 - 15490.327: 79.9255% ( 14) 00:15:15.543 15490.327 - 15609.484: 80.0393% ( 11) 00:15:15.543 15609.484 - 15728.640: 80.1635% ( 12) 00:15:15.543 15728.640 - 15847.796: 80.3601% ( 19) 00:15:15.543 15847.796 - 15966.953: 80.5153% ( 15) 00:15:15.543 15966.953 - 16086.109: 80.6602% ( 14) 00:15:15.543 16086.109 - 16205.265: 80.9499% ( 28) 00:15:15.543 16205.265 - 16324.422: 81.1362% ( 18) 00:15:15.543 16324.422 - 16443.578: 81.3431% ( 20) 00:15:15.543 16443.578 - 16562.735: 81.5087% ( 16) 00:15:15.543 16562.735 - 16681.891: 81.7156% ( 20) 00:15:15.543 16681.891 - 16801.047: 82.0364% ( 31) 00:15:15.543 16801.047 - 16920.204: 82.2848% ( 24) 00:15:15.543 16920.204 - 17039.360: 82.5745% ( 28) 00:15:15.543 17039.360 - 17158.516: 82.8435% ( 26) 00:15:15.543 17158.516 - 17277.673: 83.1850% ( 33) 00:15:15.543 17277.673 - 17396.829: 83.4954% ( 30) 00:15:15.543 17396.829 - 17515.985: 83.7231% ( 22) 00:15:15.543 17515.985 - 17635.142: 83.9094% ( 18) 00:15:15.543 17635.142 - 17754.298: 84.0956% ( 18) 00:15:15.543 17754.298 - 17873.455: 84.4060% ( 30) 00:15:15.543 17873.455 - 17992.611: 84.8200% ( 40) 00:15:15.543 17992.611 - 18111.767: 85.3477% ( 51) 00:15:15.543 18111.767 - 18230.924: 85.8961% ( 53) 00:15:15.543 18230.924 - 18350.080: 86.4445% ( 53) 00:15:15.543 18350.080 - 18469.236: 87.1171% ( 65) 00:15:15.543 18469.236 - 18588.393: 87.6863% ( 55) 00:15:15.543 18588.393 - 18707.549: 88.3692% ( 66) 00:15:15.543 18707.549 - 18826.705: 89.0108% ( 62) 00:15:15.543 18826.705 - 18945.862: 89.6937% ( 66) 00:15:15.543 18945.862 - 19065.018: 90.3767% ( 66) 00:15:15.543 19065.018 - 19184.175: 91.1010% ( 70) 00:15:15.543 19184.175 - 19303.331: 91.6908% ( 57) 00:15:15.543 19303.331 - 19422.487: 92.2496% ( 54) 00:15:15.543 19422.487 - 19541.644: 92.8498% ( 58) 00:15:15.543 19541.644 - 19660.800: 93.3464% ( 48) 00:15:15.543 19660.800 - 19779.956: 93.8742% ( 51) 00:15:15.543 19779.956 - 19899.113: 94.3709% ( 48) 00:15:15.543 19899.113 - 20018.269: 94.8469% ( 46) 00:15:15.543 20018.269 - 20137.425: 95.2297% ( 37) 00:15:15.543 20137.425 - 20256.582: 95.6333% ( 39) 00:15:15.543 20256.582 - 20375.738: 96.1093% ( 46) 00:15:15.543 20375.738 - 20494.895: 96.5749% ( 45) 00:15:15.543 20494.895 - 20614.051: 96.9474% ( 36) 00:15:15.543 20614.051 - 20733.207: 97.2682% ( 31) 00:15:15.543 20733.207 - 20852.364: 97.5683% ( 29) 00:15:15.543 20852.364 - 20971.520: 97.8373% ( 26) 00:15:15.543 20971.520 - 21090.676: 98.0753% ( 23) 00:15:15.543 21090.676 - 21209.833: 98.2512% ( 17) 00:15:15.543 21209.833 - 21328.989: 98.3547% ( 10) 00:15:15.543 21328.989 - 21448.145: 98.5306% ( 17) 00:15:15.543 21448.145 - 21567.302: 98.6548% ( 12) 00:15:15.543 21567.302 - 21686.458: 98.7479% ( 9) 00:15:15.543 21686.458 - 21805.615: 98.7997% ( 5) 00:15:15.543 21805.615 - 21924.771: 98.8204% ( 2) 00:15:15.543 21924.771 - 22043.927: 98.8514% ( 3) 00:15:15.543 22043.927 - 22163.084: 98.8721% ( 2) 00:15:15.543 22163.084 - 22282.240: 98.9031% ( 3) 00:15:15.543 22282.240 - 22401.396: 98.9238% ( 2) 00:15:15.543 22401.396 - 22520.553: 98.9549% ( 3) 00:15:15.543 22520.553 - 22639.709: 98.9756% ( 2) 00:15:15.543 22639.709 - 22758.865: 98.9963% ( 2) 00:15:15.543 22758.865 - 22878.022: 99.0170% ( 2) 00:15:15.543 22878.022 - 22997.178: 99.0480% ( 
3) 00:15:15.543 22997.178 - 23116.335: 99.0687% ( 2) 00:15:15.543 23116.335 - 23235.491: 99.0894% ( 2) 00:15:15.543 23235.491 - 23354.647: 99.1204% ( 3) 00:15:15.543 23354.647 - 23473.804: 99.1411% ( 2) 00:15:15.543 23473.804 - 23592.960: 99.1722% ( 3) 00:15:15.543 23592.960 - 23712.116: 99.1929% ( 2) 00:15:15.543 23712.116 - 23831.273: 99.2136% ( 2) 00:15:15.543 23831.273 - 23950.429: 99.2343% ( 2) 00:15:15.543 23950.429 - 24069.585: 99.2550% ( 2) 00:15:15.543 24069.585 - 24188.742: 99.2860% ( 3) 00:15:15.543 24188.742 - 24307.898: 99.3067% ( 2) 00:15:15.543 24307.898 - 24427.055: 99.3274% ( 2) 00:15:15.543 24427.055 - 24546.211: 99.3377% ( 1) 00:15:15.543 32887.156 - 33125.469: 99.3791% ( 4) 00:15:15.543 33125.469 - 33363.782: 99.4205% ( 4) 00:15:15.543 33363.782 - 33602.095: 99.4723% ( 5) 00:15:15.543 33602.095 - 33840.407: 99.5137% ( 4) 00:15:15.543 33840.407 - 34078.720: 99.5654% ( 5) 00:15:15.543 34078.720 - 34317.033: 99.6171% ( 5) 00:15:15.543 34317.033 - 34555.345: 99.6689% ( 5) 00:15:15.543 34555.345 - 34793.658: 99.7206% ( 5) 00:15:15.543 34793.658 - 35031.971: 99.7620% ( 4) 00:15:15.543 35031.971 - 35270.284: 99.8137% ( 5) 00:15:15.543 35270.284 - 35508.596: 99.8551% ( 4) 00:15:15.543 35508.596 - 35746.909: 99.9069% ( 5) 00:15:15.543 35746.909 - 35985.222: 99.9483% ( 4) 00:15:15.543 35985.222 - 36223.535: 100.0000% ( 5) 00:15:15.543 00:15:15.543 13:09:21 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:15:15.543 00:15:15.543 real 0m2.738s 00:15:15.543 user 0m2.313s 00:15:15.543 sys 0m0.310s 00:15:15.543 13:09:21 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.543 13:09:21 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:15:15.543 ************************************ 00:15:15.543 END TEST nvme_perf 00:15:15.543 ************************************ 00:15:15.543 13:09:21 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:15:15.543 13:09:21 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:15.543 13:09:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.543 13:09:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.543 ************************************ 00:15:15.543 START TEST nvme_hello_world 00:15:15.543 ************************************ 00:15:15.543 13:09:21 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:15:15.802 Initializing NVMe Controllers 00:15:15.802 Attached to 0000:00:10.0 00:15:15.802 Namespace ID: 1 size: 6GB 00:15:15.802 Attached to 0000:00:11.0 00:15:15.802 Namespace ID: 1 size: 5GB 00:15:15.802 Attached to 0000:00:13.0 00:15:15.802 Namespace ID: 1 size: 1GB 00:15:15.802 Attached to 0000:00:12.0 00:15:15.802 Namespace ID: 1 size: 4GB 00:15:15.802 Namespace ID: 2 size: 4GB 00:15:15.802 Namespace ID: 3 size: 4GB 00:15:15.802 Initialization complete. 00:15:15.802 INFO: using host memory buffer for IO 00:15:15.802 Hello world! 00:15:15.802 INFO: using host memory buffer for IO 00:15:15.802 Hello world! 00:15:15.802 INFO: using host memory buffer for IO 00:15:15.802 Hello world! 00:15:15.802 INFO: using host memory buffer for IO 00:15:15.802 Hello world! 00:15:15.802 INFO: using host memory buffer for IO 00:15:15.802 Hello world! 00:15:15.802 INFO: using host memory buffer for IO 00:15:15.802 Hello world! 
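[Editor's note: the START TEST/END TEST banners and the real/user/sys triplets framing every test in this log come from the bash run_test helper in autotest_common.sh, which wraps each test binary or shell function in bash's time builtin; the @1105/@1111/@1130 trace lines are its bookkeeping. A minimal sketch of such a wrapper, assuming only what is visible in this log; SPDK's real helper does more, and this is not its source:]

run_test_sketch() {
    # Print the banner, time the wrapped command, then propagate its exit status.
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # the time builtin emits the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

[Hypothetical usage mirroring the invocation above: run_test_sketch nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0]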
00:15:15.802 ************************************
00:15:15.802 END TEST nvme_hello_world
00:15:15.802 ************************************
00:15:15.802
00:15:15.802 real 0m0.343s
00:15:15.802 user 0m0.144s
00:15:15.802 sys 0m0.148s
00:15:15.802 13:09:22 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:15.802 13:09:22 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:15:15.802 13:09:22 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:15:15.802 13:09:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:15.802 13:09:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:15.802 13:09:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:15.802 ************************************
00:15:15.802 START TEST nvme_sgl
00:15:15.802 ************************************
00:15:15.802 13:09:22 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:15:16.060 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:15:16.060 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:15:16.060 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:15:16.060 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:15:16.060 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:15:16.060 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:15:16.060 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:15:16.060 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:15:16.060 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:15:16.325 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:15:16.325 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:15:16.325 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:15:16.325 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:15:16.325 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:15:16.325 NVMe Readv/Writev Request test
00:15:16.325 Attached to 0000:00:10.0
00:15:16.325 Attached to 0000:00:11.0
00:15:16.325 Attached to 0000:00:13.0
00:15:16.325 Attached to 0000:00:12.0
00:15:16.325 0000:00:10.0: build_io_request_2 test passed
00:15:16.325 0000:00:10.0: build_io_request_4 test passed
00:15:16.325 0000:00:10.0: build_io_request_5 test passed
00:15:16.325 0000:00:10.0: build_io_request_6 test passed
00:15:16.325 0000:00:10.0: build_io_request_7 test passed
00:15:16.325 0000:00:10.0: build_io_request_10 test passed
00:15:16.326 0000:00:11.0: build_io_request_2 test passed
00:15:16.326 0000:00:11.0: build_io_request_4 test passed
00:15:16.326 0000:00:11.0: build_io_request_5 test passed
00:15:16.326 0000:00:11.0: build_io_request_6 test passed
00:15:16.326 0000:00:11.0: build_io_request_7 test passed
00:15:16.326 0000:00:11.0: build_io_request_10 test passed
00:15:16.326 Cleaning up...
00:15:16.326
00:15:16.326 real 0m0.458s
00:15:16.326 user 0m0.244s
00:15:16.326 sys 0m0.164s
00:15:16.326 13:09:22 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:16.326 ************************************
00:15:16.326 END TEST nvme_sgl
00:15:16.326 ************************************
00:15:16.326 13:09:22 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:15:16.326 13:09:22 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:15:16.326 13:09:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:16.326 13:09:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:16.326 13:09:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:16.326 ************************************
00:15:16.326 START TEST nvme_e2edp
00:15:16.326 ************************************
00:15:16.326 13:09:22 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:15:16.584 NVMe Write/Read with End-to-End data protection test
00:15:16.584 Attached to 0000:00:10.0
00:15:16.584 Attached to 0000:00:11.0
00:15:16.584 Attached to 0000:00:13.0
00:15:16.584 Attached to 0000:00:12.0
00:15:16.584 Cleaning up...
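[Editor's note: in the nvme_sgl output above, the "Invalid IO length parameter" lines are negative cases the test drives on purpose, while the "test passed" lines are its positive cases; the run still reaches "Cleaning up..." and END TEST, so neither marker is by itself a failure. A purely illustrative one-liner for tallying both markers per controller when triaging a saved copy of this log; the autorun.log filename is an assumption:]

grep -Eo '0000:00:1[0-3]\.0: build_io_request_[0-9]+ (test passed|Invalid IO length parameter)' autorun.log |
    sort | uniq -c | sort -rn    # occurrence count per marker and controller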
00:15:16.584
00:15:16.584 real 0m0.368s
00:15:16.584 user 0m0.124s
00:15:16.584 sys 0m0.186s
00:15:16.584 13:09:23 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:16.584 13:09:23 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:15:16.584 ************************************
00:15:16.584 END TEST nvme_e2edp
00:15:16.584 ************************************
00:15:16.843 13:09:23 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:15:16.843 13:09:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:16.843 13:09:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:16.843 13:09:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:16.843 ************************************
00:15:16.843 START TEST nvme_reserve
00:15:16.843 ************************************
00:15:16.843 13:09:23 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:15:17.102 =====================================================
00:15:17.102 NVMe Controller at PCI bus 0, device 16, function 0
00:15:17.102 =====================================================
00:15:17.102 Reservations: Not Supported
00:15:17.102 =====================================================
00:15:17.102 NVMe Controller at PCI bus 0, device 17, function 0
00:15:17.102 =====================================================
00:15:17.102 Reservations: Not Supported
00:15:17.102 =====================================================
00:15:17.102 NVMe Controller at PCI bus 0, device 19, function 0
00:15:17.102 =====================================================
00:15:17.102 Reservations: Not Supported
00:15:17.102 =====================================================
00:15:17.102 NVMe Controller at PCI bus 0, device 18, function 0
00:15:17.102 =====================================================
00:15:17.102 Reservations: Not Supported
00:15:17.102 Reservation test passed
00:15:17.102
00:15:17.102 real 0m0.324s
00:15:17.102 user 0m0.116s
00:15:17.102 sys 0m0.164s
00:15:17.102 ************************************
00:15:17.102 END TEST nvme_reserve
00:15:17.102 ************************************
00:15:17.102 13:09:23 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:17.102 13:09:23 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:15:17.102 13:09:23 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:15:17.102 13:09:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:17.102 13:09:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:17.102 13:09:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:17.102 ************************************
00:15:17.102 START TEST nvme_err_injection
00:15:17.102 ************************************
00:15:17.102 13:09:23 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:15:17.360 NVMe Error Injection test
00:15:17.360 Attached to 0000:00:10.0
00:15:17.360 Attached to 0000:00:11.0
00:15:17.361 Attached to 0000:00:13.0
00:15:17.361 Attached to 0000:00:12.0
00:15:17.361 0000:00:10.0: get features failed as expected
00:15:17.361 0000:00:11.0: get features failed as expected
00:15:17.361 0000:00:13.0: get features failed as expected
00:15:17.361 0000:00:12.0: get features failed as expected
00:15:17.361 0000:00:10.0: get features successfully as expected
00:15:17.361 0000:00:11.0: get features successfully as expected
00:15:17.361 0000:00:13.0: get features successfully as expected
00:15:17.361 0000:00:12.0: get features successfully as expected
00:15:17.361 0000:00:10.0: read failed as expected
00:15:17.361 0000:00:11.0: read failed as expected
00:15:17.361 0000:00:13.0: read failed as expected
00:15:17.361 0000:00:12.0: read failed as expected
00:15:17.361 0000:00:10.0: read successfully as expected
00:15:17.361 0000:00:11.0: read successfully as expected
00:15:17.361 0000:00:13.0: read successfully as expected
00:15:17.361 0000:00:12.0: read successfully as expected
00:15:17.361 Cleaning up...
00:15:17.361 ************************************
00:15:17.361 END TEST nvme_err_injection
00:15:17.361 ************************************
00:15:17.361
00:15:17.361 real 0m0.340s
00:15:17.361 user 0m0.145s
00:15:17.361 sys 0m0.147s
00:15:17.361 13:09:23 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:17.361 13:09:23 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:15:17.618 13:09:23 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:15:17.618 13:09:23 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:15:17.618 13:09:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:17.618 13:09:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:17.618 ************************************
00:15:17.618 START TEST nvme_overhead
00:15:17.618 ************************************
00:15:17.618 13:09:23 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:15:18.991 Initializing NVMe Controllers
00:15:18.991 Attached to 0000:00:10.0
00:15:18.991 Attached to 0000:00:11.0
00:15:18.991 Attached to 0000:00:13.0
00:15:18.991 Attached to 0000:00:12.0
00:15:18.991 Initialization complete. Launching workers.
00:15:18.991 submit (in ns) avg, min, max = 17133.3, 14271.8, 163265.0
00:15:18.991 complete (in ns) avg, min, max = 11428.8, 9267.3, 125481.8
00:15:18.991
00:15:18.991 Submit histogram
00:15:18.991 ================
00:15:18.991        Range in us     Cumulative Count
00:15:18.991 [bucket detail elided: cumulative count rises from 0.0544% at ~14.3 us to 100.0000% at ~163.8 us]
00:15:18.991
00:15:18.992 Complete histogram
00:15:18.992 ==================
00:15:18.992        Range in us     Cumulative Count
00:15:18.992 [bucket detail elided: cumulative count rises from 0.1360% at ~9.3 us to 100.0000% at ~125.7 us]
00:15:18.992
00:15:18.992 real 0m1.357s
00:15:18.992 user 0m1.125s
00:15:18.992 sys 0m0.170s
00:15:18.992 13:09:25 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:18.992 13:09:25 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:15:18.992 ************************************
00:15:18.992 END TEST nvme_overhead
00:15:18.992 ************************************
00:15:18.992 13:09:25 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:15:18.992 13:09:25 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:15:18.992 13:09:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:18.992 13:09:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:18.992 ************************************
00:15:18.992 START TEST nvme_arbitration
00:15:18.992 ************************************
00:15:18.992 13:09:25 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:15:22.274 Initializing NVMe Controllers
00:15:22.274 Attached to 0000:00:10.0
00:15:22.274 Attached to 0000:00:11.0
00:15:22.274 Attached to 0000:00:13.0
00:15:22.274 Attached to 0000:00:12.0
00:15:22.274 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:15:22.274 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:15:22.274 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:15:22.274 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:15:22.274 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:15:22.274 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:15:22.274 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:15:22.274 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:15:22.274 Initialization complete. Launching workers.
00:15:22.274 Starting thread on core 1 with urgent priority queue
00:15:22.274 Starting thread on core 2 with urgent priority queue
00:15:22.274 Starting thread on core 3 with urgent priority queue
00:15:22.274 Starting thread on core 0 with urgent priority queue
00:15:22.274 QEMU NVMe Ctrl (12340 ) core 0: 746.67 IO/s 133.93 secs/100000 ios
00:15:22.274 QEMU NVMe Ctrl (12342 ) core 0: 746.67 IO/s 133.93 secs/100000 ios
00:15:22.274 QEMU NVMe Ctrl (12341 ) core 1: 682.67 IO/s 146.48 secs/100000 ios
00:15:22.274 QEMU NVMe Ctrl (12342 ) core 1: 682.67 IO/s 146.48 secs/100000 ios
00:15:22.274 QEMU NVMe Ctrl (12343 ) core 2: 512.00 IO/s 195.31 secs/100000 ios
00:15:22.274 QEMU NVMe Ctrl (12342 ) core 3: 682.67 IO/s 146.48 secs/100000 ios
00:15:22.274 ========================================================
00:15:22.274
00:15:22.274 real 0m3.426s
00:15:22.274 user 0m9.326s
00:15:22.274 sys 0m0.161s
00:15:22.274 13:09:28 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:22.274 13:09:28 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:15:22.274 ************************************
00:15:22.274 END TEST nvme_arbitration
00:15:22.274 ************************************
00:15:22.274 13:09:28 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:15:22.274 13:09:28 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:22.274 13:09:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:22.274 13:09:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:22.274 ************************************
00:15:22.274 START TEST nvme_single_aen
00:15:22.274 ************************************
00:15:22.274 13:09:28 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:15:22.840 Asynchronous Event Request test
00:15:22.841 Attached to 0000:00:10.0
00:15:22.841 Attached to 0000:00:11.0
00:15:22.841 Attached to 0000:00:13.0
00:15:22.841 Attached to 0000:00:12.0
00:15:22.841 Reset controller to setup AER completions for this process
00:15:22.841 Registering asynchronous event callbacks...
00:15:22.841 Getting orig temperature thresholds of all controllers
00:15:22.841 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:22.841 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:22.841 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:22.841 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:22.841 Setting all controllers temperature threshold low to trigger AER
00:15:22.841 Waiting for all controllers temperature threshold to be set lower
00:15:22.841 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:22.841 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:15:22.841 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:22.841 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:15:22.841 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:22.841 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:15:22.841 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:22.841 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:15:22.841 Waiting for all controllers to trigger AER and reset threshold
00:15:22.841 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:15:22.841 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:15:22.841 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:15:22.841 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:15:22.841 Cleaning up...
00:15:22.841
00:15:22.841 real 0m0.332s
00:15:22.841 user 0m0.122s
00:15:22.841 sys 0m0.165s
00:15:22.841 ************************************
00:15:22.841 END TEST nvme_single_aen
00:15:22.841 ************************************
00:15:22.841 13:09:29 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:22.841 13:09:29 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:15:22.841 13:09:29 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:15:22.841 13:09:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:22.841 13:09:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:22.841 13:09:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:22.841 ************************************
00:15:22.841 START TEST nvme_doorbell_aers
00:15:22.841 ************************************
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
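[Editor's note: the trace above shows how nvme_doorbell_aers finds its targets: get_nvme_bdfs asks scripts/gen_nvme.sh for a JSON controller config and pulls each PCI address out with jq, and nvme.sh then runs doorbell_aers once per address under timeout --preserve-status so one hung controller cannot stall the suite. A standalone sketch of the same pattern, with the repository path taken from this log and the error handling added for illustration:]

#!/usr/bin/env bash
# Sketch of the enumeration-and-loop pattern traced above; not verbatim SPDK source.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
for bdf in "${bdfs[@]}"; do
    # --preserve-status: timeout exits with the command's own status instead of 124
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done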
00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:22.841 13:09:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:23.099 [2024-12-06 13:09:29.516758] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:15:33.091 Executing: test_write_invalid_db 00:15:33.091 Waiting for AER completion... 00:15:33.091 Failure: test_write_invalid_db 00:15:33.091 00:15:33.091 Executing: test_invalid_db_write_overflow_sq 00:15:33.091 Waiting for AER completion... 00:15:33.091 Failure: test_invalid_db_write_overflow_sq 00:15:33.091 00:15:33.091 Executing: test_invalid_db_write_overflow_cq 00:15:33.091 Waiting for AER completion... 00:15:33.091 Failure: test_invalid_db_write_overflow_cq 00:15:33.091 00:15:33.091 13:09:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:33.091 13:09:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:33.091 [2024-12-06 13:09:39.569148] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:15:43.055 Executing: test_write_invalid_db 00:15:43.055 Waiting for AER completion... 00:15:43.055 Failure: test_write_invalid_db 00:15:43.055 00:15:43.055 Executing: test_invalid_db_write_overflow_sq 00:15:43.055 Waiting for AER completion... 00:15:43.055 Failure: test_invalid_db_write_overflow_sq 00:15:43.055 00:15:43.055 Executing: test_invalid_db_write_overflow_cq 00:15:43.055 Waiting for AER completion... 00:15:43.055 Failure: test_invalid_db_write_overflow_cq 00:15:43.055 00:15:43.055 13:09:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:43.055 13:09:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:43.313 [2024-12-06 13:09:49.656735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:15:53.296 Executing: test_write_invalid_db 00:15:53.296 Waiting for AER completion... 00:15:53.296 Failure: test_write_invalid_db 00:15:53.296 00:15:53.296 Executing: test_invalid_db_write_overflow_sq 00:15:53.296 Waiting for AER completion... 00:15:53.296 Failure: test_invalid_db_write_overflow_sq 00:15:53.296 00:15:53.296 Executing: test_invalid_db_write_overflow_cq 00:15:53.296 Waiting for AER completion... 
00:15:53.296 Failure: test_invalid_db_write_overflow_cq 00:15:53.296 00:15:53.296 13:09:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:53.296 13:09:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:53.296 [2024-12-06 13:09:59.630184] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 Executing: test_write_invalid_db 00:16:03.264 Waiting for AER completion... 00:16:03.264 Failure: test_write_invalid_db 00:16:03.264 00:16:03.264 Executing: test_invalid_db_write_overflow_sq 00:16:03.264 Waiting for AER completion... 00:16:03.264 Failure: test_invalid_db_write_overflow_sq 00:16:03.264 00:16:03.264 Executing: test_invalid_db_write_overflow_cq 00:16:03.264 Waiting for AER completion... 00:16:03.264 Failure: test_invalid_db_write_overflow_cq 00:16:03.264 00:16:03.264 00:16:03.264 real 0m40.257s 00:16:03.264 user 0m34.316s 00:16:03.264 sys 0m5.535s 00:16:03.264 ************************************ 00:16:03.264 END TEST nvme_doorbell_aers 00:16:03.264 ************************************ 00:16:03.264 13:10:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.264 13:10:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:16:03.264 13:10:09 nvme -- nvme/nvme.sh@97 -- # uname 00:16:03.264 13:10:09 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:16:03.264 13:10:09 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:16:03.264 13:10:09 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:16:03.264 13:10:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.264 13:10:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:03.264 ************************************ 00:16:03.264 START TEST nvme_multi_aen 00:16:03.264 ************************************ 00:16:03.264 13:10:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:16:03.264 [2024-12-06 13:10:09.722470] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.722806] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.722869] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.725046] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.725120] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.725153] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.727042] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. 
Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.727308] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.727601] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.729384] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.729579] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 [2024-12-06 13:10:09.729611] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65091) is not found. Dropping the request. 00:16:03.264 Child process pid: 65607 00:16:03.522 [Child] Asynchronous Event Request test 00:16:03.522 [Child] Attached to 0000:00:10.0 00:16:03.522 [Child] Attached to 0000:00:11.0 00:16:03.522 [Child] Attached to 0000:00:13.0 00:16:03.522 [Child] Attached to 0000:00:12.0 00:16:03.522 [Child] Registering asynchronous event callbacks... 00:16:03.522 [Child] Getting orig temperature thresholds of all controllers 00:16:03.522 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:03.522 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:03.522 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:03.522 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:03.522 [Child] Waiting for all controllers to trigger AER and reset threshold 00:16:03.522 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:03.522 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:03.522 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:03.523 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:03.523 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:03.523 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:03.523 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:03.523 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:03.523 [Child] Cleaning up... 00:16:03.781 Asynchronous Event Request test 00:16:03.781 Attached to 0000:00:10.0 00:16:03.781 Attached to 0000:00:11.0 00:16:03.781 Attached to 0000:00:13.0 00:16:03.781 Attached to 0000:00:12.0 00:16:03.781 Reset controller to setup AER completions for this process 00:16:03.781 Registering asynchronous event callbacks... 
00:16:03.781 Getting orig temperature thresholds of all controllers 00:16:03.781 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:03.781 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:03.781 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:03.781 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:03.781 Setting all controllers temperature threshold low to trigger AER 00:16:03.781 Waiting for all controllers temperature threshold to be set lower 00:16:03.781 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:03.781 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:16:03.781 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:03.781 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:16:03.781 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:03.781 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:16:03.781 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:03.781 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:16:03.781 Waiting for all controllers to trigger AER and reset threshold 00:16:03.781 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:03.781 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:03.781 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:03.781 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:03.781 Cleaning up... 00:16:03.781 ************************************ 00:16:03.781 END TEST nvme_multi_aen 00:16:03.781 ************************************ 00:16:03.781 00:16:03.781 real 0m0.629s 00:16:03.781 user 0m0.242s 00:16:03.781 sys 0m0.271s 00:16:03.781 13:10:10 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.781 13:10:10 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:16:03.781 13:10:10 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:16:03.781 13:10:10 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:03.781 13:10:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.781 13:10:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:03.781 ************************************ 00:16:03.781 START TEST nvme_startup 00:16:03.781 ************************************ 00:16:03.781 13:10:10 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:16:04.039 Initializing NVMe Controllers 00:16:04.039 Attached to 0000:00:10.0 00:16:04.039 Attached to 0000:00:11.0 00:16:04.039 Attached to 0000:00:13.0 00:16:04.039 Attached to 0000:00:12.0 00:16:04.039 Initialization complete. 00:16:04.039 Time used:201535.031 (us). 
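Annotation: the nvme_startup run beginning above invokes the test binary directly; the command below is the one logged, and the reading of -t as a startup time budget in microseconds is inferred from the reported "Time used" figure, not confirmed by this log.
startup=/home/vagrant/spdk_repo/spdk/test/nvme/startup/startup
"$startup" -t 1000000   # attach to every controller, then report init time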
00:16:04.039 00:16:04.039 real 0m0.284s 00:16:04.039 user 0m0.105s 00:16:04.039 sys 0m0.129s 00:16:04.039 13:10:10 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.039 13:10:10 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:16:04.039 ************************************ 00:16:04.039 END TEST nvme_startup 00:16:04.039 ************************************ 00:16:04.039 13:10:10 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:16:04.039 13:10:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:04.039 13:10:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.039 13:10:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.039 ************************************ 00:16:04.039 START TEST nvme_multi_secondary 00:16:04.039 ************************************ 00:16:04.039 13:10:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:16:04.039 13:10:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65663 00:16:04.039 13:10:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:16:04.039 13:10:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65664 00:16:04.039 13:10:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:16:04.039 13:10:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:16:07.321 Initializing NVMe Controllers 00:16:07.321 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:07.321 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:07.321 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:07.321 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:07.321 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:16:07.321 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:16:07.321 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:16:07.321 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:16:07.321 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:16:07.321 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:16:07.321 Initialization complete. Launching workers. 
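Annotation: the nvme_multi_secondary stage launching here runs three spdk_nvme_perf instances against one shared SPDK instance. The commands below are the ones logged above, arranged to show the pattern: a common shm id (-i 0) joins the processes, distinct core masks (-c) keep them apart, and the first, longer-running instance stays up while the secondaries attach.
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # first instance, core 0, 5 s
pid0=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1, 3 s
pid1=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary, core 2, foreground
wait "$pid0" "$pid1"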
00:16:07.321 ======================================================== 00:16:07.321 Latency(us) 00:16:07.321 Device Information : IOPS MiB/s Average min max 00:16:07.321 PCIE (0000:00:10.0) NSID 1 from core 2: 2427.73 9.48 6588.17 1299.43 14331.58 00:16:07.321 PCIE (0000:00:11.0) NSID 1 from core 2: 2427.73 9.48 6589.99 1199.84 13854.14 00:16:07.321 PCIE (0000:00:13.0) NSID 1 from core 2: 2427.73 9.48 6591.01 1335.14 13936.57 00:16:07.321 PCIE (0000:00:12.0) NSID 1 from core 2: 2427.73 9.48 6590.32 1338.05 16094.11 00:16:07.321 PCIE (0000:00:12.0) NSID 2 from core 2: 2427.73 9.48 6591.31 1363.47 16291.06 00:16:07.321 PCIE (0000:00:12.0) NSID 3 from core 2: 2427.73 9.48 6591.25 1340.04 16787.26 00:16:07.321 ======================================================== 00:16:07.321 Total : 14566.35 56.90 6590.34 1199.84 16787.26 00:16:07.321 00:16:07.580 13:10:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65663 00:16:07.580 Initializing NVMe Controllers 00:16:07.580 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:07.580 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:07.580 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:07.580 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:07.580 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:16:07.580 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:16:07.580 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:16:07.580 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:16:07.580 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:16:07.580 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:16:07.580 Initialization complete. Launching workers. 00:16:07.580 ======================================================== 00:16:07.580 Latency(us) 00:16:07.580 Device Information : IOPS MiB/s Average min max 00:16:07.580 PCIE (0000:00:10.0) NSID 1 from core 1: 5267.53 20.58 3035.55 962.71 6820.57 00:16:07.580 PCIE (0000:00:11.0) NSID 1 from core 1: 5267.53 20.58 3037.28 986.87 6496.91 00:16:07.580 PCIE (0000:00:13.0) NSID 1 from core 1: 5267.53 20.58 3037.42 973.80 6706.06 00:16:07.580 PCIE (0000:00:12.0) NSID 1 from core 1: 5267.53 20.58 3038.03 972.75 6266.60 00:16:07.580 PCIE (0000:00:12.0) NSID 2 from core 1: 5267.53 20.58 3038.12 976.49 7312.59 00:16:07.580 PCIE (0000:00:12.0) NSID 3 from core 1: 5267.53 20.58 3038.23 982.75 6960.49 00:16:07.580 ======================================================== 00:16:07.580 Total : 31605.17 123.46 3037.44 962.71 7312.59 00:16:07.580 00:16:10.108 Initializing NVMe Controllers 00:16:10.108 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:10.108 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:10.108 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:10.108 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:10.108 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:10.108 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:16:10.108 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:16:10.108 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:16:10.108 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:16:10.108 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:16:10.108 Initialization complete. Launching workers. 
00:16:10.108 ======================================================== 00:16:10.108 Latency(us) 00:16:10.108 Device Information : IOPS MiB/s Average min max 00:16:10.108 PCIE (0000:00:10.0) NSID 1 from core 0: 8135.86 31.78 1964.95 932.41 6989.12 00:16:10.108 PCIE (0000:00:11.0) NSID 1 from core 0: 8135.86 31.78 1965.94 950.39 7417.15 00:16:10.108 PCIE (0000:00:13.0) NSID 1 from core 0: 8135.86 31.78 1965.82 849.98 7928.17 00:16:10.108 PCIE (0000:00:12.0) NSID 1 from core 0: 8135.86 31.78 1965.70 816.78 7847.51 00:16:10.108 PCIE (0000:00:12.0) NSID 2 from core 0: 8135.86 31.78 1965.58 761.99 6990.12 00:16:10.108 PCIE (0000:00:12.0) NSID 3 from core 0: 8135.86 31.78 1965.46 711.42 7114.38 00:16:10.108 ======================================================== 00:16:10.108 Total : 48815.16 190.68 1965.57 711.42 7928.17 00:16:10.108 00:16:10.108 13:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65664 00:16:10.108 13:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65739 00:16:10.108 13:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:16:10.108 13:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65740 00:16:10.108 13:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:16:10.108 13:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:16:13.392 Initializing NVMe Controllers 00:16:13.392 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:13.392 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:13.392 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:13.392 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:13.392 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:16:13.392 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:16:13.392 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:16:13.392 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:16:13.392 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:16:13.392 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:16:13.392 Initialization complete. Launching workers. 
00:16:13.392 ======================================================== 00:16:13.392 Latency(us) 00:16:13.392 Device Information : IOPS MiB/s Average min max 00:16:13.392 PCIE (0000:00:10.0) NSID 1 from core 1: 5446.79 21.28 2935.67 1131.99 8649.25 00:16:13.392 PCIE (0000:00:11.0) NSID 1 from core 1: 5446.79 21.28 2937.12 1214.83 8874.94 00:16:13.392 PCIE (0000:00:13.0) NSID 1 from core 1: 5446.79 21.28 2937.09 1231.89 8308.37 00:16:13.392 PCIE (0000:00:12.0) NSID 1 from core 1: 5446.79 21.28 2937.02 1174.59 8608.67 00:16:13.392 PCIE (0000:00:12.0) NSID 2 from core 1: 5446.79 21.28 2937.01 1145.05 8877.59 00:16:13.392 PCIE (0000:00:12.0) NSID 3 from core 1: 5446.79 21.28 2937.02 1165.03 8997.53 00:16:13.392 ======================================================== 00:16:13.392 Total : 32680.72 127.66 2936.82 1131.99 8997.53 00:16:13.392 00:16:13.392 Initializing NVMe Controllers 00:16:13.392 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:13.392 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:13.392 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:13.392 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:13.392 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:13.392 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:16:13.392 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:16:13.392 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:16:13.392 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:16:13.392 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:16:13.392 Initialization complete. Launching workers. 00:16:13.392 ======================================================== 00:16:13.392 Latency(us) 00:16:13.392 Device Information : IOPS MiB/s Average min max 00:16:13.392 PCIE (0000:00:10.0) NSID 1 from core 0: 5667.70 22.14 2821.13 941.32 8170.60 00:16:13.392 PCIE (0000:00:11.0) NSID 1 from core 0: 5667.70 22.14 2822.33 988.96 8131.47 00:16:13.392 PCIE (0000:00:13.0) NSID 1 from core 0: 5667.70 22.14 2822.31 986.31 8220.93 00:16:13.392 PCIE (0000:00:12.0) NSID 1 from core 0: 5667.70 22.14 2822.21 992.62 8138.02 00:16:13.392 PCIE (0000:00:12.0) NSID 2 from core 0: 5667.70 22.14 2822.08 953.72 8163.15 00:16:13.392 PCIE (0000:00:12.0) NSID 3 from core 0: 5667.70 22.14 2821.98 990.86 8133.34 00:16:13.392 ======================================================== 00:16:13.392 Total : 34006.18 132.84 2822.01 941.32 8220.93 00:16:13.392 00:16:15.291 Initializing NVMe Controllers 00:16:15.291 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:15.291 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:15.291 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:15.291 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:15.291 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:16:15.291 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:16:15.291 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:16:15.291 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:16:15.291 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:16:15.291 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:16:15.291 Initialization complete. Launching workers. 
00:16:15.291 ======================================================== 00:16:15.291 Latency(us) 00:16:15.291 Device Information : IOPS MiB/s Average min max 00:16:15.291 PCIE (0000:00:10.0) NSID 1 from core 2: 3637.65 14.21 4395.47 958.45 18719.02 00:16:15.291 PCIE (0000:00:11.0) NSID 1 from core 2: 3637.65 14.21 4397.66 974.08 20535.96 00:16:15.291 PCIE (0000:00:13.0) NSID 1 from core 2: 3637.65 14.21 4397.35 997.69 20328.98 00:16:15.291 PCIE (0000:00:12.0) NSID 1 from core 2: 3637.65 14.21 4396.61 957.79 16833.76 00:16:15.291 PCIE (0000:00:12.0) NSID 2 from core 2: 3637.65 14.21 4393.65 893.11 15063.68 00:16:15.291 PCIE (0000:00:12.0) NSID 3 from core 2: 3637.65 14.21 4393.79 828.34 19128.23 00:16:15.291 ======================================================== 00:16:15.291 Total : 21825.91 85.26 4395.76 828.34 20535.96 00:16:15.292 00:16:15.292 ************************************ 00:16:15.292 END TEST nvme_multi_secondary 00:16:15.292 ************************************ 00:16:15.292 13:10:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65739 00:16:15.292 13:10:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65740 00:16:15.292 00:16:15.292 real 0m11.282s 00:16:15.292 user 0m18.698s 00:16:15.292 sys 0m0.930s 00:16:15.292 13:10:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.292 13:10:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:16:15.292 13:10:21 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:16:15.292 13:10:21 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:16:15.292 13:10:21 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64671 ]] 00:16:15.292 13:10:21 nvme -- common/autotest_common.sh@1094 -- # kill 64671 00:16:15.292 13:10:21 nvme -- common/autotest_common.sh@1095 -- # wait 64671 00:16:15.292 [2024-12-06 13:10:21.793987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.794096] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.794147] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.794176] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.797513] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.797816] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.797878] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.797911] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.801249] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 
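Annotation: the burst of "owning process ... is not found" errors around this point is the expected fallout of killing the stub process that pre-initialized the controllers; its pending admin requests are dropped during teardown. Condensed from the kill_stub trace here, with the pid and stub path as logged for this run:
if [[ -e /proc/64671 ]]; then   # stub pid from this run
    kill 64671
    wait 64671                  # reap it; outstanding admin requests get dropped
fi
rm -f /var/run/spdk_stub0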
00:16:15.292 [2024-12-06 13:10:21.801333] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.801366] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.801396] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.804173] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.804356] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.804382] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.292 [2024-12-06 13:10:21.804400] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request. 00:16:15.549 13:10:21 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:16:15.549 13:10:21 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:16:15.549 13:10:21 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:16:15.549 13:10:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.549 13:10:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.549 13:10:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.549 ************************************ 00:16:15.549 START TEST bdev_nvme_reset_stuck_adm_cmd 00:16:15.549 ************************************ 00:16:15.549 13:10:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:16:15.549 * Looking for test storage... 
00:16:15.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:15.549 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:15.549 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:16:15.549 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:16:15.807 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:15.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.808 --rc genhtml_branch_coverage=1 00:16:15.808 --rc genhtml_function_coverage=1 00:16:15.808 --rc genhtml_legend=1 00:16:15.808 --rc geninfo_all_blocks=1 00:16:15.808 --rc geninfo_unexecuted_blocks=1 00:16:15.808 00:16:15.808 ' 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:15.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.808 --rc genhtml_branch_coverage=1 00:16:15.808 --rc genhtml_function_coverage=1 00:16:15.808 --rc genhtml_legend=1 00:16:15.808 --rc geninfo_all_blocks=1 00:16:15.808 --rc geninfo_unexecuted_blocks=1 00:16:15.808 00:16:15.808 ' 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:15.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.808 --rc genhtml_branch_coverage=1 00:16:15.808 --rc genhtml_function_coverage=1 00:16:15.808 --rc genhtml_legend=1 00:16:15.808 --rc geninfo_all_blocks=1 00:16:15.808 --rc geninfo_unexecuted_blocks=1 00:16:15.808 00:16:15.808 ' 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:15.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.808 --rc genhtml_branch_coverage=1 00:16:15.808 --rc genhtml_function_coverage=1 00:16:15.808 --rc genhtml_legend=1 00:16:15.808 --rc geninfo_all_blocks=1 00:16:15.808 --rc geninfo_unexecuted_blocks=1 00:16:15.808 00:16:15.808 ' 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:16:15.808 
13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:16:15.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65906 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65906 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65906 ']' 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
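Annotation: waitforlisten, invoked above, blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. A rough stand-in is sketched below, assuming rpc.py's -t timeout flag and rpc_get_methods as a readiness probe; the real helper lives in autotest_common.sh and does more bookkeeping than this.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
spdk_target_pid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # assumption: poll the RPC socket until the target is up
done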
00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.808 13:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:16.065 [2024-12-06 13:10:22.338335] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:16:16.066 [2024-12-06 13:10:22.338945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65906 ] 00:16:16.066 [2024-12-06 13:10:22.534881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.323 [2024-12-06 13:10:22.643146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.323 [2024-12-06 13:10:22.643262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.323 [2024-12-06 13:10:22.643345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.323 [2024-12-06 13:10:22.643373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:17.255 nvme0n1 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_NA51z.txt 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:17.255 true 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733490623 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65929 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:16:17.255 13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:17.255 
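Annotation: the reset-stuck-admin-command sequence unfolding here is driven entirely over RPC. Condensed from the calls logged above and below; --opc 10 is the admin GET FEATURES opcode (0x0a) that the injection arms, which matches the "GET FEATURES NUMBER OF QUEUES" completion printed later in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
"$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# bdev_nvme_send_cmd then submits the armed command and parks (see trace);
# the reset must complete anyway and manually fail the stuck command.
"$rpc" bdev_nvme_reset_controller nvme0
"$rpc" bdev_nvme_detach_controller nvme0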
13:10:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:16:19.155 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:19.155 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.155 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:19.155 [2024-12-06 13:10:25.581814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:16:19.156 [2024-12-06 13:10:25.582196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:19.156 [2024-12-06 13:10:25.582238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:19.156 [2024-12-06 13:10:25.582259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.156 [2024-12-06 13:10:25.584355] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:16:19.156 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65929 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65929 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65929 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:16:19.156 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_NA51z.txt 00:16:19.413 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:16:19.413 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:16:19.413 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:16:19.413 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:16:19.413 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:16:19.413 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:16:19.413 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:16:19.413 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_NA51z.txt 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65906 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65906 ']' 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65906 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65906 00:16:19.414 killing process with pid 65906 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65906' 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65906 00:16:19.414 13:10:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65906 00:16:21.400 13:10:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:16:21.400 13:10:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:16:21.400 ************************************ 00:16:21.400 END TEST bdev_nvme_reset_stuck_adm_cmd 00:16:21.400 ************************************ 00:16:21.400 00:16:21.400 real 0m5.938s 
00:16:21.400 user 0m21.226s 00:16:21.400 sys 0m0.638s 00:16:21.400 13:10:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.400 13:10:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:21.659 13:10:27 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:16:21.659 13:10:27 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:16:21.659 13:10:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:21.659 13:10:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.659 13:10:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:21.659 ************************************ 00:16:21.659 START TEST nvme_fio 00:16:21.659 ************************************ 00:16:21.659 13:10:27 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:16:21.659 13:10:27 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:21.659 13:10:27 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:16:21.659 13:10:27 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:16:21.659 13:10:27 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:21.659 13:10:27 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:16:21.659 13:10:27 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:21.659 13:10:27 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:21.659 13:10:27 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:21.659 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:21.659 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:21.659 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:16:21.659 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:16:21.659 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:21.659 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:21.659 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:21.918 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:21.918 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:22.177 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:22.177 13:10:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:22.177 13:10:28 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:22.177 13:10:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:22.436 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:22.436 fio-3.35 00:16:22.436 Starting 1 thread 00:16:25.755 00:16:25.755 test: (groupid=0, jobs=1): err= 0: pid=66076: Fri Dec 6 13:10:31 2024 00:16:25.755 read: IOPS=13.5k, BW=52.8MiB/s (55.3MB/s)(106MiB/2001msec) 00:16:25.755 slat (nsec): min=4587, max=66382, avg=7313.24, stdev=2760.79 00:16:25.755 clat (usec): min=277, max=10625, avg=4717.83, stdev=811.64 00:16:25.755 lat (usec): min=283, max=10691, avg=4725.15, stdev=812.58 00:16:25.755 clat percentiles (usec): 00:16:25.755 | 1.00th=[ 3359], 5.00th=[ 3687], 10.00th=[ 3982], 20.00th=[ 4228], 00:16:25.755 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4686], 00:16:25.755 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5800], 95.00th=[ 6390], 00:16:25.755 | 99.00th=[ 7635], 99.50th=[ 8029], 99.90th=[ 8455], 99.95th=[ 9503], 00:16:25.755 | 99.99th=[10552] 00:16:25.755 bw ( KiB/s): min=50328, max=55072, per=97.33%, avg=52594.33, stdev=2379.05, samples=3 00:16:25.755 iops : min=12582, max=13768, avg=13148.33, stdev=594.80, samples=3 00:16:25.755 write: IOPS=13.5k, BW=52.7MiB/s (55.3MB/s)(105MiB/2001msec); 0 zone resets 00:16:25.755 slat (nsec): min=4666, max=92305, avg=7358.83, stdev=2782.12 00:16:25.755 clat (usec): min=339, max=10519, avg=4727.00, stdev=814.15 00:16:25.755 lat (usec): min=346, max=10534, avg=4734.36, stdev=815.13 00:16:25.755 clat percentiles (usec): 00:16:25.755 | 1.00th=[ 3392], 5.00th=[ 3687], 10.00th=[ 4015], 20.00th=[ 4228], 00:16:25.755 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4686], 00:16:25.755 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5800], 95.00th=[ 6390], 00:16:25.755 | 99.00th=[ 7701], 99.50th=[ 8094], 99.90th=[ 8586], 99.95th=[ 9503], 00:16:25.755 | 99.99th=[10290] 00:16:25.755 bw ( KiB/s): min=50744, max=54832, per=97.57%, avg=52663.67, stdev=2055.31, samples=3 00:16:25.755 iops : min=12686, max=13706, avg=13165.00, stdev=512.82, samples=3 00:16:25.755 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:16:25.755 lat (msec) : 2=0.06%, 4=10.08%, 10=89.79%, 20=0.03% 00:16:25.755 cpu : usr=98.65%, sys=0.15%, ctx=4, majf=0, 
minf=609 00:16:25.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:25.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.755 issued rwts: total=27032,27000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.755 00:16:25.755 Run status group 0 (all jobs): 00:16:25.755 READ: bw=52.8MiB/s (55.3MB/s), 52.8MiB/s-52.8MiB/s (55.3MB/s-55.3MB/s), io=106MiB (111MB), run=2001-2001msec 00:16:25.755 WRITE: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=105MiB (111MB), run=2001-2001msec 00:16:25.755 ----------------------------------------------------- 00:16:25.755 Suppressions used: 00:16:25.755 count bytes template 00:16:25.755 1 32 /usr/src/fio/parse.c 00:16:25.755 1 8 libtcmalloc_minimal.so 00:16:25.755 ----------------------------------------------------- 00:16:25.755 00:16:25.755 13:10:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:25.755 13:10:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:25.755 13:10:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:25.755 13:10:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:26.014 13:10:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:26.014 13:10:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:26.272 13:10:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:26.272 13:10:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:26.272 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:26.272 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:26.272 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:26.272 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:26.272 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:26.272 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:16:26.272 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:26.272 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:26.273 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:26.273 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:26.273 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:16:26.273 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:26.273 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:26.273 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:16:26.273 13:10:32 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:26.273 13:10:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:26.273 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:26.273 fio-3.35 00:16:26.273 Starting 1 thread 00:16:29.557 00:16:29.557 test: (groupid=0, jobs=1): err= 0: pid=66142: Fri Dec 6 13:10:35 2024 00:16:29.557 read: IOPS=15.2k, BW=59.2MiB/s (62.1MB/s)(118MiB/2001msec) 00:16:29.557 slat (nsec): min=4587, max=73816, avg=6378.65, stdev=2029.33 00:16:29.557 clat (usec): min=506, max=9546, avg=4195.57, stdev=635.11 00:16:29.557 lat (usec): min=513, max=9619, avg=4201.94, stdev=635.85 00:16:29.557 clat percentiles (usec): 00:16:29.557 | 1.00th=[ 2933], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3785], 00:16:29.557 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4113], 00:16:29.557 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5080], 00:16:29.557 | 99.00th=[ 6587], 99.50th=[ 7177], 99.90th=[ 7963], 99.95th=[ 8225], 00:16:29.557 | 99.99th=[ 9372] 00:16:29.557 bw ( KiB/s): min=59696, max=65160, per=100.00%, avg=62552.00, stdev=2740.43, samples=3 00:16:29.557 iops : min=14924, max=16290, avg=15638.00, stdev=685.11, samples=3 00:16:29.557 write: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(119MiB/2001msec); 0 zone resets 00:16:29.557 slat (usec): min=4, max=108, avg= 6.53, stdev= 2.10 00:16:29.557 clat (usec): min=329, max=9379, avg=4206.39, stdev=650.54 00:16:29.557 lat (usec): min=336, max=9396, avg=4212.92, stdev=651.27 00:16:29.557 clat percentiles (usec): 00:16:29.557 | 1.00th=[ 2933], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3785], 00:16:29.557 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4113], 00:16:29.557 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5145], 00:16:29.557 | 99.00th=[ 6652], 99.50th=[ 7242], 99.90th=[ 8029], 99.95th=[ 8455], 00:16:29.557 | 99.99th=[ 9241] 00:16:29.557 bw ( KiB/s): min=59016, max=64088, per=100.00%, avg=62093.33, stdev=2703.78, samples=3 00:16:29.557 iops : min=14754, max=16022, avg=15523.33, stdev=675.94, samples=3 00:16:29.557 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:16:29.557 lat (msec) : 2=0.06%, 4=50.44%, 10=49.47% 00:16:29.557 cpu : usr=99.00%, sys=0.05%, ctx=3, majf=0, minf=609 00:16:29.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:29.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.557 issued rwts: total=30332,30391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.557 00:16:29.557 Run status group 0 (all jobs): 00:16:29.557 READ: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=118MiB (124MB), run=2001-2001msec 00:16:29.557 WRITE: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=119MiB (124MB), run=2001-2001msec 00:16:29.557 ----------------------------------------------------- 00:16:29.557 Suppressions used: 00:16:29.557 count bytes template 00:16:29.557 1 32 /usr/src/fio/parse.c 00:16:29.557 1 8 libtcmalloc_minimal.so 00:16:29.557 ----------------------------------------------------- 00:16:29.557 00:16:29.557 13:10:36 nvme.nvme_fio 
-- nvme/nvme.sh@44 -- # ran_fio=true 00:16:29.557 13:10:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:29.557 13:10:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:29.557 13:10:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:30.122 13:10:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:30.122 13:10:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:30.381 13:10:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:30.381 13:10:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:30.381 13:10:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:30.381 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:30.381 fio-3.35 00:16:30.381 Starting 1 thread 00:16:34.584 00:16:34.584 test: (groupid=0, jobs=1): err= 0: pid=66205: Fri Dec 6 13:10:40 2024 00:16:34.584 read: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(127MiB/2001msec) 00:16:34.584 slat (usec): min=4, max=366, avg= 5.98, stdev= 3.23 00:16:34.584 clat (usec): min=228, max=9225, avg=3906.08, stdev=578.49 00:16:34.584 lat (usec): min=234, max=9237, avg=3912.06, stdev=579.10 00:16:34.584 clat percentiles (usec): 00:16:34.584 | 1.00th=[ 2769], 5.00th=[ 3261], 10.00th=[ 3425], 20.00th=[ 3556], 00:16:34.584 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 
60.00th=[ 3851], 00:16:34.584 | 70.00th=[ 3982], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4883], 00:16:34.584 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 8356], 99.95th=[ 8848], 00:16:34.584 | 99.99th=[ 9110] 00:16:34.584 bw ( KiB/s): min=61472, max=69736, per=100.00%, avg=65629.33, stdev=4132.23, samples=3 00:16:34.584 iops : min=15368, max=17434, avg=16407.33, stdev=1033.06, samples=3 00:16:34.584 write: IOPS=16.3k, BW=63.8MiB/s (66.9MB/s)(128MiB/2001msec); 0 zone resets 00:16:34.584 slat (nsec): min=4640, max=35996, avg=6062.83, stdev=1657.26 00:16:34.584 clat (usec): min=257, max=9219, avg=3909.14, stdev=578.83 00:16:34.584 lat (usec): min=263, max=9230, avg=3915.20, stdev=579.42 00:16:34.584 clat percentiles (usec): 00:16:34.584 | 1.00th=[ 2769], 5.00th=[ 3228], 10.00th=[ 3425], 20.00th=[ 3556], 00:16:34.584 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3851], 00:16:34.584 | 70.00th=[ 3982], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4883], 00:16:34.584 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 8225], 99.95th=[ 8455], 00:16:34.584 | 99.99th=[ 9241] 00:16:34.584 bw ( KiB/s): min=61048, max=69696, per=100.00%, avg=65445.33, stdev=4325.87, samples=3 00:16:34.584 iops : min=15262, max=17424, avg=16361.33, stdev=1081.47, samples=3 00:16:34.584 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:16:34.584 lat (msec) : 2=0.05%, 4=70.28%, 10=29.63% 00:16:34.584 cpu : usr=98.35%, sys=0.25%, ctx=27, majf=0, minf=608 00:16:34.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:34.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.584 issued rwts: total=32610,32691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.584 00:16:34.584 Run status group 0 (all jobs): 00:16:34.584 READ: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=127MiB (134MB), run=2001-2001msec 00:16:34.584 WRITE: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=128MiB (134MB), run=2001-2001msec 00:16:34.584 ----------------------------------------------------- 00:16:34.584 Suppressions used: 00:16:34.584 count bytes template 00:16:34.584 1 32 /usr/src/fio/parse.c 00:16:34.584 1 8 libtcmalloc_minimal.so 00:16:34.584 ----------------------------------------------------- 00:16:34.584 00:16:34.584 13:10:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:34.585 13:10:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:34.585 13:10:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:34.585 13:10:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:34.585 13:10:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:34.585 13:10:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:34.844 13:10:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:34.844 13:10:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:34.844 13:10:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:34.844 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:34.844 fio-3.35 00:16:34.844 Starting 1 thread 00:16:39.034 00:16:39.034 test: (groupid=0, jobs=1): err= 0: pid=66273: Fri Dec 6 13:10:45 2024 00:16:39.034 read: IOPS=14.5k, BW=56.6MiB/s (59.3MB/s)(113MiB/2001msec) 00:16:39.034 slat (usec): min=4, max=103, avg= 6.62, stdev= 2.41 00:16:39.034 clat (usec): min=286, max=11625, avg=4392.64, stdev=797.38 00:16:39.034 lat (usec): min=293, max=11631, avg=4399.25, stdev=798.37 00:16:39.034 clat percentiles (usec): 00:16:39.034 | 1.00th=[ 2835], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3687], 00:16:39.034 | 30.00th=[ 4015], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4555], 00:16:39.034 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5145], 95.00th=[ 5669], 00:16:39.034 | 99.00th=[ 7308], 99.50th=[ 7767], 99.90th=[10290], 99.95th=[10945], 00:16:39.034 | 99.99th=[11600] 00:16:39.034 bw ( KiB/s): min=51552, max=61104, per=96.03%, avg=55642.67, stdev=4921.30, samples=3 00:16:39.034 iops : min=12888, max=15276, avg=13910.67, stdev=1230.33, samples=3 00:16:39.034 write: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(113MiB/2001msec); 0 zone resets 00:16:39.034 slat (nsec): min=4652, max=72259, avg=6791.92, stdev=2488.62 00:16:39.034 clat (usec): min=343, max=11619, avg=4403.17, stdev=800.90 00:16:39.034 lat (usec): min=351, max=11626, avg=4409.96, stdev=801.93 00:16:39.034 clat percentiles (usec): 00:16:39.034 | 1.00th=[ 2802], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3720], 00:16:39.034 | 30.00th=[ 4015], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4555], 00:16:39.034 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5669], 00:16:39.034 | 
99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[10159], 99.95th=[10814], 00:16:39.034 | 99.99th=[11338] 00:16:39.034 bw ( KiB/s): min=51768, max=61024, per=95.91%, avg=55656.00, stdev=4802.21, samples=3 00:16:39.034 iops : min=12942, max=15256, avg=13914.00, stdev=1200.55, samples=3 00:16:39.034 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:16:39.034 lat (msec) : 2=0.17%, 4=29.73%, 10=69.95%, 20=0.12% 00:16:39.034 cpu : usr=98.75%, sys=0.15%, ctx=5, majf=0, minf=606 00:16:39.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:39.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:39.034 issued rwts: total=28985,29030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:39.034 00:16:39.034 Run status group 0 (all jobs): 00:16:39.034 READ: bw=56.6MiB/s (59.3MB/s), 56.6MiB/s-56.6MiB/s (59.3MB/s-59.3MB/s), io=113MiB (119MB), run=2001-2001msec 00:16:39.034 WRITE: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:16:39.034 ----------------------------------------------------- 00:16:39.034 Suppressions used: 00:16:39.034 count bytes template 00:16:39.034 1 32 /usr/src/fio/parse.c 00:16:39.034 1 8 libtcmalloc_minimal.so 00:16:39.034 ----------------------------------------------------- 00:16:39.034 00:16:39.034 13:10:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:39.034 13:10:45 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:16:39.034 00:16:39.034 real 0m17.434s 00:16:39.034 user 0m13.754s 00:16:39.034 sys 0m2.755s 00:16:39.034 ************************************ 00:16:39.034 END TEST nvme_fio 00:16:39.034 ************************************ 00:16:39.034 13:10:45 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.034 13:10:45 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:16:39.034 00:16:39.034 real 1m32.005s 00:16:39.034 user 3m48.064s 00:16:39.034 sys 0m14.790s 00:16:39.034 13:10:45 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.034 ************************************ 00:16:39.034 END TEST nvme 00:16:39.034 ************************************ 00:16:39.034 13:10:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.034 13:10:45 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:16:39.034 13:10:45 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:16:39.034 13:10:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.034 13:10:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.034 13:10:45 -- common/autotest_common.sh@10 -- # set +x 00:16:39.034 ************************************ 00:16:39.034 START TEST nvme_scc 00:16:39.035 ************************************ 00:16:39.035 13:10:45 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:16:39.292 * Looking for test storage... 
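[Editor's note] The fio_plugin wrapper traced once per controller above follows a single pattern: ldd inspects the SPDK fio plugin for a linked sanitizer runtime, and if one is found it is placed ahead of the plugin in LD_PRELOAD so the ASAN runtime initializes before fio dlopens the engine. A condensed sketch of the logic visible in the xtrace (common/autotest_common.sh around lines 1341-1356); paths are the ones shown in the log, but this is a simplification, not the exact helper:

    # Preload the ASAN runtime the plugin was built against, then run fio.
    fio_plugin() {
        local plugin=$1; shift
        local sanitizers=('libasan' 'libclang_rt.asan') sanitizer
        local asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # Third ldd column is the resolved runtime path, if linked.
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        # fio itself is not instrumented, so the runtime must come first.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

With /usr/lib64/libasan.so.8 detected, this yields exactly the LD_PRELOAD='.../libasan.so.8 .../spdk_nvme' invocations seen before each "Starting 1 thread" line.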
00:16:39.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@345 -- # : 1 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@368 -- # return 0 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.292 --rc genhtml_branch_coverage=1 00:16:39.292 --rc genhtml_function_coverage=1 00:16:39.292 --rc genhtml_legend=1 00:16:39.292 --rc geninfo_all_blocks=1 00:16:39.292 --rc geninfo_unexecuted_blocks=1 00:16:39.292 00:16:39.292 ' 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.292 --rc genhtml_branch_coverage=1 00:16:39.292 --rc genhtml_function_coverage=1 00:16:39.292 --rc genhtml_legend=1 00:16:39.292 --rc geninfo_all_blocks=1 00:16:39.292 --rc geninfo_unexecuted_blocks=1 00:16:39.292 00:16:39.292 ' 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.292 --rc genhtml_branch_coverage=1 00:16:39.292 --rc genhtml_function_coverage=1 00:16:39.292 --rc genhtml_legend=1 00:16:39.292 --rc geninfo_all_blocks=1 00:16:39.292 --rc geninfo_unexecuted_blocks=1 00:16:39.292 00:16:39.292 ' 00:16:39.292 13:10:45 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.292 --rc genhtml_branch_coverage=1 00:16:39.292 --rc genhtml_function_coverage=1 00:16:39.292 --rc genhtml_legend=1 00:16:39.292 --rc geninfo_all_blocks=1 00:16:39.292 --rc geninfo_unexecuted_blocks=1 00:16:39.292 00:16:39.292 ' 00:16:39.292 13:10:45 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:39.292 13:10:45 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:39.292 13:10:45 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:39.292 13:10:45 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:39.292 13:10:45 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.292 13:10:45 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.292 13:10:45 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.293 13:10:45 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.293 13:10:45 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.293 13:10:45 nvme_scc -- paths/export.sh@5 -- # export PATH 00:16:39.293 13:10:45 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:39.293 13:10:45 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:16:39.293 13:10:45 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.293 13:10:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:16:39.293 13:10:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:16:39.293 13:10:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:16:39.293 13:10:45 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:39.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:39.808 Waiting for block devices as requested 00:16:39.808 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:39.808 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:40.066 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:40.066 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:45.343 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:45.343 13:10:51 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:16:45.343 13:10:51 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:45.343 13:10:51 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:45.344 13:10:51 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:45.344 13:10:51 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:45.344 13:10:51 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:45.344 13:10:51 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
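[Editor's note] The wall of eval 'nvme0[...]=...' lines that follows is nvme_get at work: scan_nvme_ctrls pipes each controller's "nvme id-ctrl" output through an IFS=: read loop and folds every "field : value" pair into a global associative array, so later checks can consult values such as ${nvme0[oncs]} without re-running nvme-cli. A minimal sketch of that pattern, assuming the nvme-cli path shown in the trace; the real helper in test/common/nvme/functions.sh uses eval and extra normalization:

    # nvme_get nvme0 id-ctrl /dev/nvme0  -> populates the global array "nvme0"
    nvme_get() {
        local ref=$1 reg val
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            reg=${reg//[[:space:]]/}        # field name, e.g. "vid", "mdts"
            val=${val# }                    # drop the separator's leading space
            printf -v "${ref}[$reg]" '%s' "$val"
        done < <(/usr/local/src/nvme-cli/nvme "$2" "$3")
    }

After the scan, ${nvme0[sn]} holds '12341 ' and ${nvme0[subnqn]} the QEMU NQN captured below.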
00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
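[Editor's note] Several of the captured fields feed later test decisions, and mdts above is a typical one. NVMe reports MDTS as a power-of-two multiple of the controller's minimum memory page size, so a sketch of turning the captured value into a byte limit (assuming the usual 4 KiB CAP.MPSMIN, which these QEMU controllers use) looks like:

    mdts=${nvme0[mdts]}      # 7 in the id-ctrl dump above
    page_size=4096           # assumption: CAP.MPSMIN corresponds to 4 KiB
    echo "max transfer: $(( page_size * (1 << mdts) )) bytes"   # 524288 = 512 KiB

An mdts of 0 would instead mean the controller advertises no transfer-size limit.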
00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.344 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:45.345 13:10:51 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:45.345 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:45.346 13:10:51 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:16:45.346 
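The block above is the script's generic parser at work: nvme_get runs nvme-cli's id-ctrl/id-ns, splits each "key : value" output line on the first colon with IFS=: read -r reg val, and evals the pair into a global associative array (nvme0, ng0n1, and so on). A minimal self-contained sketch of that idiom, simplified from what the trace shows (the _sketch name and the whitespace trimming are illustrative; assumes nvme-cli is installed):

nvme_get_sketch() {                   # simplified stand-in for nvme_get
    local ref=$1 reg val
    shift
    local -gA "$ref=()"               # global associative array, as in the trace
    while IFS=: read -r reg val; do   # split "key : value" on the first ':'
        reg=${reg//[[:space:]]/}      # trim the key
        val=${val# }                  # drop the space after the colon
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"    # e.g. nvme0[vwc]=0x7
    done < <("$@")
}
# nvme_get_sketch nvme0 nvme id-ctrl /dev/nvme0 && echo "${nvme0[mn]}"
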
13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:16:45.346 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:16:45.347 13:10:51 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:45.347 13:10:51 nvme_scc 
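The ng0n1 array just filled in pins down the namespace geometry: flbas=0x4 selects LBA format 4, whose descriptor reads "ms:0 lbads:12 rp:0 (in use)", i.e. metadata-free 4096-byte blocks, and nsze/ncap/nuse are all 0x140000 blocks. A quick check of what that works out to, using only values from the trace:

lbads=12        # from lbaf4: "ms:0 lbads:12 rp:0 (in use)"
nsze=0x140000   # namespace size in logical blocks
echo $(( nsze * (1 << lbads) ))               # 5368709120 bytes
echo "$(( nsze * (1 << lbads) >> 30 )) GiB"   # exactly 5 GiB
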
-- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.347 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:45.348 13:10:51 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:45.348 13:10:51 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:45.348 13:10:51 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:45.348 13:10:51 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:45.348 13:10:51 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:45.348 13:10:51 
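With both views of the namespace parsed (ng0n1 via the char device, nvme0n1 via the block device), the script records everything in its global registries: the _ctrl_ns nameref indexes each namespace by number, and ctrls/nvmes/bdfs key the controller to its attribute array, its namespace map, and its PCI address (0000:00:11.0 here) before the loop moves on to nvme1 at 0000:00:10.0. A compact sketch of that bookkeeping, with the standalone framing added for illustration (bash 4.3+ namerefs):

declare -A ctrls nvmes bdfs nvme0_ns
declare -n _ctrl_ns=nvme0_ns   # nameref: assignments land in nvme0_ns
ns=nvme0n1
_ctrl_ns[${ns##*n}]=$ns        # ${ns##*n} strips through the last 'n': nvme0_ns[1]=nvme0n1
ctrls[nvme0]=nvme0
nvmes[nvme0]=nvme0_ns
bdfs[nvme0]=0000:00:11.0
echo "ns ${!nvme0_ns[*]} -> ${nvme0_ns[1]} @ ${bdfs[nvme0]}"   # ns 1 -> nvme0n1 @ 0000:00:11.0
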
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.348 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.348 
13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:45.349 
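Two of the nvme1 fields captured just above are packed encodings rather than plain numbers: ver=0x10400 holds major/minor/tertiary version bytes (NVMe 1.4.0), and mdts=7 caps a single transfer at 2^7 minimum-size pages, i.e. 512 KiB if MPSMIN is the usual 4 KiB (the page size itself comes from the controller's CAP register, not from this log). Hypothetical decode helpers, for illustration only:

decode_ver()  { local v=$1; echo "$(( v >> 16 )).$(( (v >> 8) & 0xff )).$(( v & 0xff ))"; }
decode_mdts() { local m=$1 mpsmin=${2:-4096}; echo "$(( (1 << m) * mpsmin )) bytes"; }
decode_ver 0x10400   # 1.4.0
decode_mdts 7        # 524288 bytes, assuming 4 KiB MPSMIN
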
13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.349 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.350 13:10:51 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:16:45.350 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:45.351 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.351 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.351 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:45.351 13:10:51 
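The @16-@23 records above all come from one small loop: nvme_get runs an nvme-cli query and folds each "field: value" line of its output into a global associative array named by the first argument. A minimal sketch of that loop, reconstructed from the traced line numbers rather than copied from the SPDK source (the whitespace trimming in particular is assumed, since xtrace does not show it):

nvme_get() {
    local ref=$1 reg val                # @17: name of the array to populate
    shift                               # @18: the rest is the nvme-cli subcommand
    local -gA "$ref=()"                 # @20: declare the array at global scope
    while IFS=: read -r reg val; do     # @21: split each "reg : val" output line
        [[ -n $val ]] || continue       # @22: skip headers and blank values
        reg=${reg//[[:space:]]/}        # assumed cleanup, not visible in the trace
        val=${val# }
        eval "${ref}[$reg]=\"$val\""    # @23: e.g. nvme1[sqes]="0x66"
    done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: e.g. nvme id-ns /dev/ng1n1
}

After nvme_get nvme1 id-ctrl /dev/nvme1, every Identify Controller field is addressable as ${nvme1[field]}, which is what the later test stages rely on.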
00:16:45.351 13:10:51 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get ng1n1 id-ns: resulting assignments:
    ng1n1[nsze]=0x17a17a ng1n1[ncap]=0x17a17a ng1n1[nuse]=0x17a17a ng1n1[nsfeat]=0x14 ng1n1[nlbaf]=7 ng1n1[flbas]=0x7 ng1n1[mc]=0x3 ng1n1[dpc]=0x1f ng1n1[dps]=0 ng1n1[nmic]=0
    ng1n1[rescap]=0 ng1n1[fpi]=0 ng1n1[dlfeat]=1 ng1n1[nawun]=0 ng1n1[nawupf]=0 ng1n1[nacwu]=0 ng1n1[nabsn]=0 ng1n1[nabo]=0 ng1n1[nabspf]=0 ng1n1[noiob]=0
    ng1n1[nvmcap]=0 ng1n1[npwg]=0 ng1n1[npwa]=0 ng1n1[npdg]=0 ng1n1[npda]=0 ng1n1[nows]=0 ng1n1[mssrl]=128 ng1n1[mcl]=128 ng1n1[msrc]=127 ng1n1[nulbaf]=0
    ng1n1[anagrpid]=0 ng1n1[nsattr]=0 ng1n1[nvmsetid]=0 ng1n1[endgid]=0 ng1n1[nguid]=00000000000000000000000000000000 ng1n1[eui64]=0000000000000000
    ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
    ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
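The @54 pattern driving this namespace walk is an extglob alternation: with ctrl=/sys/class/nvme/nvme1 it expands to /sys/class/nvme/nvme1/@(ng1|nvme1n)*, so both the character node ng1n1 and the block node nvme1n1 match and each is fed back through nvme_get. A standalone sketch of the walk, with variable names as in the trace (the shopt line is assumed, since xtrace does not show it):

shopt -s extglob nullglob                   # @(...|...) alternation needs extglob
declare -A nvme1_ns=()
declare -n _ctrl_ns=nvme1_ns                # @53 uses local -n inside the function
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng1* or nvme1n*
    ns_dev=${ns##*/}                        # @56: ng1n1, then nvme1n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # @57: parse Identify Namespace
    _ctrl_ns[${ns##*n}]=$ns_dev             # @58: keyed by namespace id ("1")
done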
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:16:45.616 13:10:51 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme1n1 id-ns: resulting assignments (the block node reports the same geometry as ng1n1):
    nvme1n1[nsze]=0x17a17a nvme1n1[ncap]=0x17a17a nvme1n1[nuse]=0x17a17a nvme1n1[nsfeat]=0x14 nvme1n1[nlbaf]=7 nvme1n1[flbas]=0x7 nvme1n1[mc]=0x3 nvme1n1[dpc]=0x1f nvme1n1[dps]=0 nvme1n1[nmic]=0
    nvme1n1[rescap]=0 nvme1n1[fpi]=0 nvme1n1[dlfeat]=1 nvme1n1[nawun]=0 nvme1n1[nawupf]=0 nvme1n1[nacwu]=0 nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0 nvme1n1[noiob]=0
    nvme1n1[nvmcap]=0 nvme1n1[npwg]=0 nvme1n1[npwa]=0 nvme1n1[npdg]=0 nvme1n1[npda]=0 nvme1n1[nows]=0 nvme1n1[mssrl]=128 nvme1n1[mcl]=128 nvme1n1[msrc]=127 nvme1n1[nulbaf]=0
    nvme1n1[anagrpid]=0 nvme1n1[nsattr]=0 nvme1n1[nvmsetid]=0 nvme1n1[endgid]=0 nvme1n1[nguid]=00000000000000000000000000000000 nvme1n1[eui64]=0000000000000000
    nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
    nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
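Once both namespace views are parsed, the arrays answer questions directly. For instance, with nsze=0x17a17a blocks and lbaf7 in use (lbads:12, i.e. 4096-byte LBAs), the namespace size works out as below; these echo lines are illustrative and assume the arrays above are in scope, they are not part of the test itself:

echo "${nvme1[subnqn]}"                 # nqn.2019-08.org.qemu:12340
echo "${nvme1n1[nsze]}"                 # 0x17a17a = 1548666 LBAs
echo $(( ${nvme1n1[nsze]} * 4096 ))     # 6343335936 bytes, about 5.9 GiB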
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:16:45.617 13:10:51 nvme_scc -- scripts/common.sh@18 -- # local i
00:16:45.617 13:10:51 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:16:45.617 13:10:51 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:16:45.617 13:10:51 nvme_scc -- scripts/common.sh@27 -- # return 0
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
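The @60-@63 records above are the hand-off from parsing to bookkeeping: ctrls, nvmes, bdfs and ordered_ctrls map each controller to its namespace table and PCI address so later stages can iterate devices in a stable order. A small usage sketch (the printf loop is illustrative, not from the trace):

for ctrl in "${ordered_ctrls[@]}"; do       # index 1 -> nvme1, index 2 -> nvme2, ...
    printf '%s @ %s (ns table: %s)\n' "$ctrl" "${bdfs[$ctrl]}" "${nvmes[$ctrl]}"
done
# prints e.g.: nvme1 @ 0000:00:10.0 (ns table: nvme1_ns)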
00:16:45.617 13:10:51 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme2 id-ctrl: resulting assignments:
    nvme2[vid]=0x1b36 nvme2[ssvid]=0x1af4 nvme2[sn]='12342 ' nvme2[mn]='QEMU NVMe Ctrl ' nvme2[fr]='8.0.0 ' nvme2[rab]=6 nvme2[ieee]=525400 nvme2[cmic]=0 nvme2[mdts]=7 nvme2[cntlid]=0
    nvme2[ver]=0x10400 nvme2[rtd3r]=0 nvme2[rtd3e]=0 nvme2[oaes]=0x100 nvme2[ctratt]=0x8000 nvme2[rrls]=0 nvme2[cntrltype]=1 nvme2[fguid]=00000000-0000-0000-0000-000000000000
    nvme2[crdt1]=0 nvme2[crdt2]=0 nvme2[crdt3]=0 nvme2[nvmsr]=0 nvme2[vwci]=0 nvme2[mec]=0 nvme2[oacs]=0x12a nvme2[acl]=3 nvme2[aerl]=3 nvme2[frmw]=0x3
    nvme2[lpa]=0x7 nvme2[elpe]=0 nvme2[npss]=0 nvme2[avscc]=0 nvme2[apsta]=0 nvme2[wctemp]=343 nvme2[cctemp]=373 nvme2[mtfa]=0 nvme2[hmpre]=0 nvme2[hmmin]=0
    nvme2[tnvmcap]=0 nvme2[unvmcap]=0 nvme2[rpmbs]=0 nvme2[edstt]=0
00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:45.618 13:10:51 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.618 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:16:45.619 
13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:45.619 
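The trace above is the bash xtrace expansion of nvme_get in nvme/functions.sh: it runs nvme-cli's id-ctrl, then a read loop splits each "reg : val" line on the colon and evals the pair into a caller-named associative array (the literal "local -gA 'nvme2=()'" and per-field eval lines are visible in the trace). Before the namespace walk continues below, here is a minimal self-contained sketch of that pattern; parse_id_output and the trimming details are illustrative, not the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch of the nvme_get pattern driving the trace above: split each
# "reg : val" line of nvme-cli output and stash it in a caller-named
# associative array. Special lines (lbaf*, ps*) get extra handling in
# the real helper; this shows only the core split-and-eval.
parse_id_output() {
    local ref=$1 reg val; shift
    local -gA "$ref=()"                      # as in the trace: local -gA 'nvme2=()'
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # "mdts      " -> "mdts"
        val=${val#"${val%%[![:space:]]*}"}   # left-trim the value
        [[ -n $val ]] && eval "$ref[$reg]=\"\$val\""
    done < <("$@")                           # e.g. nvme id-ctrl /dev/nvme2
}

parse_id_output nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
echo "mdts=${nvme2[mdts]} nn=${nvme2[nn]} subnqn=${nvme2[subnqn]}"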
13:10:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:16:45.619 13:10:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:16:45.619 13:10:51 nvme_scc -- [functions.sh@21-23 trace condensed; parsed ng2n1[] values follow]
00:16:45.619 13:10:51 nvme_scc -- ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:16:45.620 13:10:52 nvme_scc -- ng2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:45.620 13:10:52 nvme_scc -- ng2n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.620 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 
13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
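The nvme_get calls traced above (nvme/functions.sh@16-23) all follow one idiom: run nvme id-ns against a device node, split every output line on its first colon into a register name and a value, and eval the pair into a global associative array named after the device. A minimal sketch of that idiom follows; the local -gA / IFS=: / read -r / eval skeleton and the nvme-cli path are lifted straight from the trace, while the whitespace cleanup is an assumption about how "lbaf  0 : ..." becomes lbaf0:

nvme_get() {
    local ref=$1 reg val
    shift
    # Declare (or reset) a global associative array named after the
    # device, exactly as the trace shows: local -gA 'ng2n3=()'
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        # Header and blank lines carry no value; the trace skips them
        # via the failing [[ -n '' ]] test.
        [[ -n $val ]] || continue
        # Assumed cleanup: squeeze "lbaf  0 " to "lbaf0", drop the
        # value's leading blanks (trailing blanks survive, as logged).
        reg=${reg//[[:space:]]/}
        val=${val#"${val%%[![:space:]]*}"}
        eval "${ref}[$reg]=\"$val\""
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

After nvme_get ng2n3 id-ns /dev/ng2n3, ${ng2n3[nsze]} expands to 0x100000 and ${ng2n3[lbaf4]} to 'ms:0 lbads:12 rp:0 (in use)', matching the assignments logged below.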
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:16:45.621 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:45.622 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
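The for ns loop at functions.sh@54 drives this whole scan. With ctrl=/sys/class/nvme/nvme2, ${ctrl##*nvme} is 2 and ${ctrl##*/} is nvme2, so the alternation matches both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..nvme2n3) under the controller's sysfs directory, and functions.sh@58 indexes _ctrl_ns by the namespace id stripped off the device name. A sketch under those assumptions, reusing the nvme_get sketch above (the surrounding function body is not visible in the trace):

shopt -s extglob                   # the @(...) alternation needs extglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue       # functions.sh@55; also guards a non-matching glob
    ns_dev=${ns##*/}               # e.g. ng2n3, later nvme2n3
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    # ${ns##*n} is everything after the last 'n', i.e. the namespace id;
    # the nvme2nX entry overwrites the ng2nX entry for the same id,
    # which matches the order the trace shows.
    _ctrl_ns[${ns##*n}]=$ns_dev
done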
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:16:45.623 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
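Every namespace above reports identical geometry, and the captured values decode directly: the low nibble of flbas=0x4 selects LBA format 4, whose descriptor lbaf4 reads ms:0 lbads:12 rp:0 (in use), meaning no separate metadata and 2^12 = 4096-byte data blocks; with nsze=0x100000 blocks that is a 4 GiB namespace. A worked check in shell arithmetic, using only the values logged above:

flbas=0x4 nsze=0x100000 lbads=12
fmt=$((flbas & 0xf))               # in-use LBA format index -> 4
block=$((1 << lbads))              # 2^12 -> 4096 bytes per block
total=$((nsze * block))            # 1048576 * 4096 -> 4294967296 (4 GiB)
printf 'lbaf%d in use: %d-byte blocks, %d bytes total\n' "$fmt" "$block" "$total"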
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:16:45.624 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:45.886 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"'
00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- #
nvme2n3[mcl]=128 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:45.887 13:10:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.887 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:45.888 13:10:52 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:45.888 13:10:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:45.888 13:10:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:45.888 13:10:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:45.888 13:10:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 
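
Each controller pass follows the same shape: iterate /sys/class/nvme/nvme*, resolve the PCI address, gate it through pci_can_use (which appears to honor the PCI allow/block lists, both empty in this run), then parse id-ctrl with nvme_get and record the result in the registries. A rough reconstruction; reading the BDF from the sysfs address attribute is an assumption of this sketch, not necessarily how scripts/common.sh derives it:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                  # nvme3
      pci=$(<"$ctrl/address")               # 0000:00:13.0 on PCIe controllers
      pci_can_use "$pci" || continue
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ctrl_dev}_ns       # name of this controller's namespace array
      bdfs[$ctrl_dev]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done
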
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:45.888 13:10:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:45.888 13:10:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 
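
Values like the oacs=0x12a captured just above are bitmasks, one optional admin capability per bit of the Identify Controller OACS field. A quick decoder, with names abbreviated from the NVMe base spec:

  oacs=0x12a
  names=("security" "format-nvm" "firmware" "ns-mgmt" "self-test"
         "directives" "nvme-mi" "virt-mgmt" "doorbell-buffer")
  for bit in "${!names[@]}"; do
      (( oacs & 1 << bit )) && echo "OACS bit $bit: ${names[bit]}"
  done
  # -> bits 1, 3, 5 and 8: Format NVM, Namespace Management,
  #    Directives and Doorbell Buffer Config on this QEMU controller
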
13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.888 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:45.889 13:10:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 
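
The wctemp=343 and cctemp=373 recorded a few entries back look high until you recall that the spec reports both temperature thresholds in kelvins:

  wctemp=343 cctemp=373
  echo "warning threshold:  $(( wctemp - 273 )) C"    # -> 70 C
  echo "critical threshold: $(( cctemp - 273 )) C"    # -> 100 C
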
13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:45.889 
13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 
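
Likewise, sqes=0x66 and cqes=0x44 above pack two log2 sizes into one byte: bits 3:0 give the required queue entry size and bits 7:4 the maximum, each as a power of two. A small decoder:

  decode_qes() {
      local qes=$(( $1 ))
      echo "min $(( 1 << (qes & 0xf) )) B, max $(( 1 << ((qes >> 4) & 0xf) )) B"
  }
  decode_qes 0x66    # SQ entries: min 64 B, max 64 B
  decode_qes 0x44    # CQ entries: min 16 B, max 16 B
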
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.889 13:10:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:45.890 13:10:52 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:45.890 13:10:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
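
The controller selection running here boils down to a single bit test: ONCS bit 8 advertises the Simple Copy Command, and 0x15d (binary 1 0101 1101) has it set on all four QEMU controllers, so the first one in index order, nvme1, wins. Condensed from the ctrl_has_scc / get_oncs / get_nvme_ctrl_feature trace; the real helpers are spread across functions.sh@171-199:

  ctrl_has_scc() {
      local -n _ctrl=$1                 # nameref into e.g. the nvme1 array (@73)
      (( _ctrl[oncs] & 1 << 8 ))        # bit 8 = Copy command supported (@188)
  }
  declare -A nvme1=([oncs]=0x15d)
  ctrl_has_scc nvme1 && echo nvme1
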
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:16:45.890 13:10:52 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:16:45.890 13:10:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:16:45.890 13:10:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:16:45.890 13:10:52 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:16:46.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:47.067 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:16:47.067 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:16:47.067 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:16:47.067 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:16:47.067 13:10:53 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:16:47.067 13:10:53 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:16:47.067 13:10:53 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:47.067 13:10:53 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:16:47.067 ************************************
00:16:47.067 START TEST nvme_simple_copy
00:16:47.067 ************************************
00:16:47.067 13:10:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:16:47.325 Initializing NVMe Controllers
00:16:47.325 Attaching to 0000:00:10.0
00:16:47.325 Controller supports SCC. Attached to 0000:00:10.0
00:16:47.325 Namespace ID: 1 size: 6GB
00:16:47.325 Initialization complete.
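
The test binary writes LBAs 0 through 63, issues a copy to destination LBA 256, and compares both ranges (results follow). Outside SPDK the same spot-check can be done with generic tools; the sketch below is hypothetical in its device path, since it assumes a namespace still visible to the kernel nvme driver rather than bound to uio_pci_generic as above, and uses the 4096-byte block size the test reports:

  src=$(mktemp); dst=$(mktemp)
  dd if=/dev/nvme0n1 of="$src" bs=4096 skip=0   count=64 status=none
  dd if=/dev/nvme0n1 of="$dst" bs=4096 skip=256 count=64 status=none
  cmp "$src" "$dst" && echo "LBAs 0-63 match destination LBA 256"
  rm -f "$src" "$dst"
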
00:16:47.325
00:16:47.325 Controller QEMU NVMe Ctrl (12340 )
00:16:47.325 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:16:47.325 Namespace Block Size:4096
00:16:47.325 Writing LBAs 0 to 63 with Random Data
00:16:47.325 Copied LBAs from 0 - 63 to the Destination LBA 256
00:16:47.325 LBAs matching Written Data: 64
00:16:47.325
00:16:47.325 real 0m0.306s
00:16:47.325 user 0m0.124s
00:16:47.325 sys 0m0.081s
00:16:47.325 13:10:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:47.325 13:10:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:16:47.325 ************************************
00:16:47.325 END TEST nvme_simple_copy
00:16:47.325 ************************************
00:16:47.325 ************************************
00:16:47.325 END TEST nvme_scc
00:16:47.325 ************************************
00:16:47.325
00:16:47.325 real 0m8.345s
00:16:47.325 user 0m1.578s
00:16:47.325 sys 0m1.658s
00:16:47.325 13:10:53 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:47.325 13:10:53 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:16:47.582 13:10:53 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:16:47.582 13:10:53 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:16:47.582 13:10:53 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:16:47.582 13:10:53 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:16:47.582 13:10:53 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:16:47.582 13:10:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:47.582 13:10:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:47.582 13:10:53 -- common/autotest_common.sh@10 -- # set +x
00:16:47.582 ************************************
00:16:47.582 START TEST nvme_fdp
00:16:47.582 ************************************
00:16:47.582 13:10:53 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:16:47.582 * Looking for test storage...
00:16:47.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:16:47.582 13:10:53 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:47.582 13:10:53 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:16:47.582 13:10:53 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:47.582 13:10:54 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.582 13:10:54 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:16:47.582 13:10:54 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.582 13:10:54 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:47.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.582 --rc genhtml_branch_coverage=1 00:16:47.582 --rc genhtml_function_coverage=1 00:16:47.582 --rc genhtml_legend=1 00:16:47.582 --rc geninfo_all_blocks=1 00:16:47.582 --rc geninfo_unexecuted_blocks=1 00:16:47.582 00:16:47.582 ' 00:16:47.582 13:10:54 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:47.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.582 --rc genhtml_branch_coverage=1 00:16:47.582 --rc genhtml_function_coverage=1 00:16:47.582 --rc genhtml_legend=1 00:16:47.582 --rc geninfo_all_blocks=1 00:16:47.582 --rc geninfo_unexecuted_blocks=1 00:16:47.582 00:16:47.582 ' 00:16:47.582 13:10:54 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:47.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.582 --rc genhtml_branch_coverage=1 00:16:47.582 --rc genhtml_function_coverage=1 00:16:47.582 --rc genhtml_legend=1 00:16:47.582 --rc geninfo_all_blocks=1 00:16:47.582 --rc geninfo_unexecuted_blocks=1 00:16:47.582 00:16:47.582 ' 00:16:47.582 13:10:54 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:47.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.582 --rc genhtml_branch_coverage=1 00:16:47.582 --rc genhtml_function_coverage=1 00:16:47.582 --rc genhtml_legend=1 00:16:47.582 --rc geninfo_all_blocks=1 00:16:47.582 --rc geninfo_unexecuted_blocks=1 00:16:47.582 00:16:47.582 ' 00:16:47.582 13:10:54 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.583 13:10:54 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.583 13:10:54 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.583 13:10:54 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.583 13:10:54 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.583 13:10:54 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.583 13:10:54 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.583 13:10:54 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.583 13:10:54 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:16:47.583 13:10:54 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:47.583 13:10:54 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:16:47.583 13:10:54 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.583 13:10:54 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:48.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:48.148 Waiting for block devices as requested 00:16:48.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.405 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.405 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.405 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:53.684 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:53.684 13:10:59 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:16:53.684 13:10:59 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:53.684 13:10:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:53.684 13:10:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:53.684 13:10:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:53.684 13:10:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:53.684 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:53.685 13:10:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:10:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:53.685 13:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.685 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:53.686 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.686 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 
13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:53.687 13:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:16:53.687 13:11:00 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.687 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:16:53.688 13:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
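The wall of assignments running through here is nvme_get mechanically converting `nvme id-ns` output into a bash associative array, one field per loop turn: with IFS=: each "reg : val" line splits into a key and a value, and eval stores it as ng0n1[reg]=val (keys like "lbaf  4" collapse to lbaf4 once the padding is stripped). A condensed sketch of that scrape loop, assuming nvme-cli; scrape_id_ns and the array name ns are ours, not the harness's:

declare -A ns
scrape_id_ns() {
    local reg val
    # Split each "field : value" line on the first ':' and keep the pair.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # squeeze padding out of the key
        [[ -n $reg && -n $val ]] && ns[$reg]=${val# }
    done < <(nvme id-ns "$1")
}
scrape_id_ns /dev/ng0n1
echo "nsze=${ns[nsze]} flbas=${ns[flbas]} mssrl=${ns[mssrl]}"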
00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:16:53.688 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:16:53.689 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
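With the ng0n1 dump complete, the fields that matter downstream decode as follows: flbas=0x4 selects LBA format 4, and lbaf4 above reads "ms:0 lbads:12 (in use)", i.e. 2^12 = 4096-byte logical blocks with no metadata, matching the "Namespace Block Size:4096" printed by the simple-copy test earlier. The copy limits mssrl=128, mcl=128 and msrc=127 (a 0-based count per the spec) bound a single Copy command. A hedged back-of-envelope on those values; the numbers come from the log, the arithmetic is ours:

flbas=0x4 lbads=12 mssrl=128 mcl=128 msrc=127
echo "active LBA format  : $(( flbas & 0xf ))"        # low nibble of FLBAS
echo "logical block size : $(( 1 << lbads )) bytes"   # LBADS is an exponent
echo "ranges per Copy    : $(( msrc + 1 ))"           # MSRC is 0-based
echo "blocks per range   : $mssrl"
echo "max bytes per Copy : $(( mcl * (1 << lbads) ))" # 128 * 4096 = 512 KiB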
00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@54-57 -- # next ns: /sys/class/nvme/nvme0/nvme0n1 exists; ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1 (via /usr/local/src/nvme-cli/nvme)
00:16:53.689 13:11:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 fields: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
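One value worth decoding from the nvme0n1 fields above: flbas=0x4 selects LBA format 4, 'ms:0 lbads:12 rp:0 (in use)', and lbads is the log2 of the data block size, so this namespace uses 4096-byte blocks with no per-block metadata; at nsze=0x140000 blocks that is exactly 5 GiB. A quick bash check of that arithmetic:

  # Decode the in-use LBA format of nvme0n1 from the fields above:
  # flbas=0x4 -> lbaf4, whose lbads:12 means 2^12 = 4096-byte blocks.
  nsze=$((0x140000))                        # namespace size in logical blocks
  lbads=12                                  # log2(block size), from lbaf4
  echo "$((nsze * (1 << lbads)))"           # 5368709120 bytes
  echo "$((nsze * (1 << lbads) >> 30)) GiB" # 5 GiB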
"' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:53.691 13:11:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:53.691 13:11:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:53.691 13:11:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:53.691 13:11:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:53.691 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:53.692 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:53.692 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:53.693 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:53.694 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:53.695 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
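Several of the nvme1 id-ctrl values above are exponent-encoded per the NVMe spec: sqes=0x66 and cqes=0x44 pack the minimum (low nibble) and maximum (high nibble) queue-entry sizes as powers of two (64-byte SQEs, 16-byte CQEs), and mdts=7 caps a data transfer at 2^7 minimum-page-size units, i.e. 512 KiB under the common 4 KiB CAP.MPSMIN (an assumption; MPSMIN is not shown in this trace). A small sketch of the unpacking:

  # Unpack the nibble-encoded queue-entry sizes and the transfer cap.
  sqes=$((0x66)) cqes=$((0x44)) mdts=7
  echo "SQE: min $((1 << (sqes & 0xf)))B, max $((1 << (sqes >> 4)))B"  # 64B/64B
  echo "CQE: min $((1 << (cqes & 0xf)))B, max $((1 << (cqes >> 4)))B"  # 16B/16B
  mpsmin=4096   # assumption: CAP.MPSMIN of 4 KiB, not visible in this trace
  echo "MDTS cap: $(( (1 << mdts) * mpsmin / 1024 )) KiB"              # 512 KiB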
00:16:53.695 13:11:00 nvme_fdp -- nvme/functions.sh@53-57 -- # local -n _ctrl_ns=nvme1_ns; /sys/class/nvme/nvme1/ng1n1 exists; ns_dev=ng1n1; nvme_get ng1n1 id-ns /dev/ng1n1
00:16:53.695 13:11:00 nvme_fdp -- nvme/functions.sh@21-23 -- # ng1n1 fields: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 ...
00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.696 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:53.962 13:11:00 nvme_fdp -- 
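Every field above goes through the same three trace records: an IFS=: / read -r split of one line of nvme-cli output, a [[ -n $val ]] guard, and an eval that stores the pair in a local -gA array named after the device. A minimal standalone sketch of that pattern, using a nameref instead of eval; parse_nvme_kv is a hypothetical stand-in, not the actual nvme/functions.sh code:

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern seen in the trace: run an nvme-cli
    # identify command, split each "reg : val" output line on the first
    # ':' and store the pairs in a caller-named associative array.
    parse_nvme_kv() {
        local -n _arr=$1            # nameref instead of the script's eval
        shift
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}    # strip padding, "lbaf  7 " -> "lbaf7"
            [[ -n $reg && -n $val ]] || continue
            _arr[$reg]=${val# }         # e.g. _arr[nsze]=0x17a17a
        done < <("$@" 2>/dev/null)      # command producing "reg : val" lines
    }

    declare -A ng1n1=()
    parse_nvme_kv ng1n1 nvme id-ns /dev/ng1n1
    echo "nsze=${ng1n1[nsze]} flbas=${ng1n1[flbas]}"

The script itself keeps the eval form so the array can be created under a name passed in as a string via local -gA; the nameref variant above is just an eval-free equivalent. The eight LBA format descriptors for ng1n1 follow: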
00:16:53.962 13:11:00 nvme_fdp -- LBA formats reported for ng1n1:
    lbaf0='ms:0 lbads:9 rp:0'    lbaf1='ms:8 lbads:9 rp:0'    lbaf2='ms:16 lbads:9 rp:0'   lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0'   lbaf5='ms:8 lbads:12 rp:0'   lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
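flbas=0x7 points at LBA format 7, matching the '(in use)' tag on lbaf7: lbads:12 means 2^12 = 4096-byte data blocks and ms:64 means 64 bytes of per-block metadata. A small sketch of that decode, assuming the values captured above (low-nibble FLBAS indexing, valid while the namespace reports at most 16 formats):

    # Recover the active block size from the captured fields.
    declare -A ng1n1=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )

    fmt=$(( ${ng1n1[flbas]} & 0xf ))     # low nibble of FLBAS = in-use format index
    lbaf=${ng1n1[lbaf$fmt]}
    [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    [[ $lbaf =~ ms:([0-9]+) ]] && ms=${BASH_REMATCH[1]}
    echo "lbaf$fmt: $((1 << lbads))-byte blocks, $ms bytes metadata"
    # -> lbaf7: 4096-byte blocks, 64 bytes metadata

The same namespace is then re-read through its block-device node, nvme1n1: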
00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:16:53.962 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:16:53.963 13:11:00 nvme_fdp -- id-ns values read into nvme1n1[]: identical to ng1n1 above
    (nsze=ncap=nuse=0x17a17a, nsfeat=0x14, nlbaf=7, flbas=0x7, mc=0x3, dpc=0x1f, mssrl=128, mcl=128,
    msrc=127, remaining scalars 0, nguid/eui64 zero, lbaf0-lbaf7 as listed, lbaf7 '(in use)')
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:16:53.964 13:11:00 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:16:53.964 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
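nvme1 is now fully registered (ctrls, nvmes, bdfs, ordered_ctrls) and the scan moves to the next controller, nvme2 at PCI 0000:00:12.0. A reduced standalone sketch of the enumeration skeleton the trace is walking; the registry names mirror the trace, while resolving the BDF through the 'device' symlink is an assumption that holds for PCIe-attached controllers:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    # Enumerate controllers under /sys/class/nvme, resolve each PCI BDF,
    # and visit both the ngXnY (char) and nvmeXnY (block) namespace nodes.
    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    for ctrl in /sys/class/nvme/nvme*; do
        ctrl_dev=${ctrl##*/}                               # e.g. nvme2
        bdf=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:12.0
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            echo "$ctrl_dev: namespace ${ns##*/}"          # id-ns parse would go here
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                    # name of the per-ctrl ns array
        bdfs[$ctrl_dev]=$bdf
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev         # index by controller number
    done
    for c in "${!bdfs[@]}"; do echo "$c -> ${bdfs[$c]}"; done

In the log, nvme_get then dumps nvme2's id-ctrl fields: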
00:16:53.964 13:11:00 nvme_fdp -- id-ctrl values read into nvme2[]:
    vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7
    cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
    mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
    nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0
    fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
    subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # 
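At this point the trace has finished filling the nvme2 controller table (id-ctrl, ending with the power-state fields) and nvme_get starts over on the first namespace node. The pattern repeated for every field above is: split each "key : value" line of nvme-cli output on ":" and eval it into a global associative array. A minimal stand-alone sketch of that pattern (simplified, not the verbatim SPDK nvme/functions.sh helper):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # e.g. declares global assoc array nvme2
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue           # skip lines without a key:value pair
            reg=${reg//[[:space:]]/}            # "sqes   " -> "sqes"
            val=${val#"${val%%[![:space:]]*}"}  # trim leading spaces from the value
            eval "${ref}[\$reg]=\$val"          # nvme2[sqes]=0x66, nvme2[cqes]=0x44, ...
        done < <("$@")                          # e.g. nvme id-ctrl /dev/nvme2
    }

Invoked as nvme_get nvme2 ... id-ctrl /dev/nvme2, after which lookups like ${nvme2[subnqn]} are available, which is exactly how the rest of this run consumes the data.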
IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.967 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 
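The nsze/ncap/nuse triple just parsed for ng2n1 (all 0x100000) are LBA counts, not bytes. Given the in-use LBA format reported further down (lbaf4, lbads:12, i.e. 4096-byte blocks), the namespace size works out as:

    lbas=$(( 0x100000 ))                       # 1,048,576 LBAs
    echo "$(( lbas * 4096 )) bytes"            # 4294967296
    echo "$(( lbas * 4096 / 1024**3 )) GiB"    # 4 GiB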
13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- 
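The mssrl/mcl/msrc values above (128/128/127) bound the NVMe Copy command; if I read the spec right, msrc is 0-based, so 127 means up to 128 source ranges. A sketch reading those limits back out of a stand-in array with the trace's values:

    declare -A ng2n1=([mssrl]=128 [mcl]=128 [msrc]=127)
    echo "copy: up to $(( ${ng2n1[msrc]} + 1 )) source ranges, each <= ${ng2n1[mssrl]} LBAs, <= ${ng2n1[mcl]} LBAs total"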
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:16:53.968 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # 
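ng2n1 is complete here and the for-ns extglob loop has moved on to ng2n2. That glob matches both node naming schemes a controller exposes under /sys/class/nvme, and each parsed node is recorded in the controller's namespace map through the _ctrl_ns nameref seen earlier. A sketch of both mechanisms (paths and array names mirror the trace; the loop body is simplified):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns                       # same trick as 'local -n' above
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # expands to @(ng2|nvme2n)*: matches ng2n1.. and nvme2n1..
        _ctrl_ns[${ns##*n}]=${ns##*/}                  # key "1" -> "ng2n1" / "nvme2n1"
    done
    declare -p nvme2_ns

Note the key is the namespace index (${ns##*n} strips everything through the last "n"), so when the block nodes are visited after the ng nodes, the same index is simply overwritten.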
ng2n2[nsze]=0x100000 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:16:53.969 13:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.969 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 
13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # 
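Each namespace in this trace reports eight LBA formats (lbaf0..lbaf7), and flbas=0x4 selects format 4, the one tagged "(in use)" with ms:0 lbads:12; lbads is the log2 of the data block size. Deriving the active block size from a parsed array, using the same values as ng2n2 above:

    declare -A ng2n2=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ${ng2n2[flbas]} & 0xf ))              # low nibble selects the format index
    lbaf=${ng2n2[lbaf$fmt]}                       # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}    # -> 12
    echo "active LBA size: $(( 1 << lbads )) bytes"   # -> 4096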
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.970 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:16:53.971 
13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:16:53.971 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:53.971 13:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.971 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:16:53.972 13:11:00 nvme_fdp -- 
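With ng2n1..ng2n3 parsed, the loop now reaches /sys/class/nvme/nvme2/nvme2n1: the ngXnY entries are the generic (character-device) NVMe nodes and nvmeXnY the block-device nodes for the same namespaces, which is why the id-ns output, and hence the parsed fields, repeat from here. The two kinds of node can be told apart with plain file tests:

    for dev in /dev/ng2n1 /dev/nvme2n1; do
        if [[ -c $dev ]]; then echo "$dev: character (generic) node"
        elif [[ -b $dev ]]; then echo "$dev: block node"
        fi
    done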
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:53.972 
13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:16:53.972 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:16:53.973 13:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:53.973 
13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
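
Every step in the stretch above is one turn of the same parse loop in nvme/functions.sh: nvme-cli's plain-text id-ns output is split on the first ':' by "IFS=: read -r reg val", empty values are skipped by the "[[ -n ... ]]" guard, and each surviving register/value pair is eval'd into a global associative array, here nvme2n1, so that "${nvme2n1[nsze]}" later yields 0x100000. A minimal bash sketch of that pattern, as an illustration rather than the verbatim nvme/functions.sh source:

    # Sketch of the nvme_get parse loop visible in the trace: read
    # `nvme <cmd> <dev>` text output into a global associative array.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                    # e.g. declare -gA nvme2n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # "lbaf  0 " -> "lbaf0"
            val=${val#"${val%%[![:space:]]*}"} # trim leading padding only
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"         # nvme2n1[nsze]=0x100000 ...
        done < <(nvme "$cmd" "$dev")           # the test drives its own
    }                                          # /usr/local/src/nvme-cli build
    # usage: nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1

The lbaf0..lbaf7 entries filled here are the eight LBA formats the namespace advertises (ms = metadata bytes per block, lbads = log2 of the data block size, so lbads:9 is 512 B and lbads:12 is 4096 B); flbas=0x4 selects index 4, which is why lbaf4, the 4 KiB/no-metadata format, is reported "(in use)".
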
00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.973 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:53.974 13:11:00 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:16:53.974 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:16:53.975 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:16:53.975 13:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.975 13:11:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.975 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:16:53.976 13:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.976 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:53.977 13:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:53.977 13:11:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:53.977 13:11:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:53.977 13:11:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:53.977 13:11:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:53.977 13:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
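
By this point the outer loop over /sys/class/nvme/nvme* has reached nvme3: pci_can_use accepted 0000:00:13.0 (the allow/block filters tested in scripts/common.sh are both empty, so the check falls through to return 0), and nvme_get is filling the nvme3 array from id-ctrl output. The values identify the QEMU-emulated controller the test runs against: vid 0x1b36, ssvid 0x1af4, sn '12343', mn 'QEMU NVMe Ctrl', fr '8.0.0'. As a hypothetical cross-check, not something the test itself runs, the same fields can be read from nvme-cli's JSON output (assuming an nvme-cli with -o json support, plus jq):

    # Pull the fields the trace just parsed out of id-ctrl as JSON,
    # sidestepping the text-format whitespace handling entirely.
    nvme id-ctrl /dev/nvme3 -o json |
        jq '{vid, ssvid, sn, mn, fr, mdts}'

The mdts=7 parsed above means the controller caps a single transfer at 2^7 units of its minimum memory page size (typically 4 KiB, i.e. 512 KiB per command).
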
00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:54.237 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 
13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.238 13:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.238 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
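Several of the hex registers captured just above pack two values into one byte. SQES and CQES, for example, carry the maximum (upper nibble) and required (lower nibble) queue entry sizes as powers of two, so the 0x66/0x44 read back here decode to 64-byte submission queue entries and 16-byte completion queue entries. A quick decode using the values from this trace:

    # Decode the SQES/CQES nibbles captured above (NVMe spec encoding).
    sqes=0x66 cqes=0x44
    printf 'SQ entry size: required %d, max %d bytes\n' \
      $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))
    printf 'CQ entry size: required %d, max %d bytes\n' \
      $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))
    # -> SQ entry size: required 64, max 64 bytes
    # -> CQ entry size: required 16, max 16 bytes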
00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:54.239 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
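MDTS from the same identify dump (7 for this controller, recorded earlier in the trace) is encoded similarly: a single transfer is capped at 2^MDTS units of the controller's minimum memory page size, which comes from CAP.MPSMIN. Assuming MPSMIN = 0 (a 4 KiB page, the usual value for QEMU's emulated controller), that works out to 512 KiB per command:

    # Max transfer size implied by MDTS (mdts=7 from the identify data above).
    # MPSMIN is an assumption here; read it from the CAP register in practice.
    mdts=7 mpsmin=0
    page=$(( 1 << (12 + mpsmin) ))                        # 4096 bytes
    echo "max transfer: $(( (1 << mdts) * page )) bytes"  # -> 524288 (512 KiB)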
00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:54.240 13:11:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:16:54.240 13:11:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:16:54.240 13:11:00 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:16:54.240 13:11:00 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:54.499 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:55.066 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:55.066 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:55.066 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:55.324 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:55.324 13:11:01 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:55.324 13:11:01 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:55.324 13:11:01 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.324 13:11:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:55.324 ************************************ 00:16:55.324 START TEST nvme_flexible_data_placement 00:16:55.324 ************************************ 00:16:55.324 13:11:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:55.583 Initializing NVMe Controllers 00:16:55.583 Attaching to 0000:00:13.0 00:16:55.583 Controller supports FDP Attached to 0000:00:13.0 00:16:55.583 Namespace ID: 1 Endurance Group ID: 1 00:16:55.583 Initialization complete. 
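The controller selection above hinges on CTRATT bit 19, the Flexible Data Placement support flag: nvme0, nvme1, and nvme2 all report ctratt=0x8000 (bit 19 clear) and are skipped, while nvme3's 0x88010 has it set, so nvme3 alone is echoed back to nvme_fdp.sh. The same test in isolation, with the values from this run:

    # CTRATT bit 19 = FDP supported (values taken from the trace above).
    for pair in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
      ctrl=${pair%%:*} ctratt=${pair#*:}
      (( ctratt & 1 << 19 )) && echo "$ctrl supports FDP"
    done
    # -> nvme3 supports FDP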
00:16:55.583 00:16:55.583 ================================== 00:16:55.583 == FDP tests for Namespace: #01 == 00:16:55.583 ================================== 00:16:55.583 00:16:55.583 Get Feature: FDP: 00:16:55.583 ================= 00:16:55.583 Enabled: Yes 00:16:55.583 FDP configuration Index: 0 00:16:55.583 00:16:55.583 FDP configurations log page 00:16:55.583 =========================== 00:16:55.583 Number of FDP configurations: 1 00:16:55.583 Version: 0 00:16:55.583 Size: 112 00:16:55.583 FDP Configuration Descriptor: 0 00:16:55.583 Descriptor Size: 96 00:16:55.583 Reclaim Group Identifier format: 2 00:16:55.583 FDP Volatile Write Cache: Not Present 00:16:55.583 FDP Configuration: Valid 00:16:55.583 Vendor Specific Size: 0 00:16:55.583 Number of Reclaim Groups: 2 00:16:55.583 Number of Reclaim Unit Handles: 8 00:16:55.583 Max Placement Identifiers: 128 00:16:55.583 Number of Namespaces Supported: 256 00:16:55.583 Reclaim unit Nominal Size: 6000000 bytes 00:16:55.583 Estimated Reclaim Unit Time Limit: Not Reported 00:16:55.583 RUH Desc #000: RUH Type: Initially Isolated 00:16:55.584 RUH Desc #001: RUH Type: Initially Isolated 00:16:55.584 RUH Desc #002: RUH Type: Initially Isolated 00:16:55.584 RUH Desc #003: RUH Type: Initially Isolated 00:16:55.584 RUH Desc #004: RUH Type: Initially Isolated 00:16:55.584 RUH Desc #005: RUH Type: Initially Isolated 00:16:55.584 RUH Desc #006: RUH Type: Initially Isolated 00:16:55.584 RUH Desc #007: RUH Type: Initially Isolated 00:16:55.584 00:16:55.584 FDP reclaim unit handle usage log page 00:16:55.584 ====================================== 00:16:55.584 Number of Reclaim Unit Handles: 8 00:16:55.584 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:16:55.584 RUH Usage Desc #001: RUH Attributes: Unused 00:16:55.584 RUH Usage Desc #002: RUH Attributes: Unused 00:16:55.584 RUH Usage Desc #003: RUH Attributes: Unused 00:16:55.584 RUH Usage Desc #004: RUH Attributes: Unused 00:16:55.584 RUH Usage Desc #005: RUH Attributes: Unused 00:16:55.584 RUH Usage Desc #006: RUH Attributes: Unused 00:16:55.584 RUH Usage Desc #007: RUH Attributes: Unused 00:16:55.584 00:16:55.584 FDP statistics log page 00:16:55.584 ======================= 00:16:55.584 Host bytes with metadata written: 763957248 00:16:55.584 Media bytes with metadata written: 764096512 00:16:55.584 Media bytes erased: 0 00:16:55.584 00:16:55.584 FDP Reclaim unit handle status 00:16:55.584 ============================== 00:16:55.584 Number of RUHS descriptors: 2 00:16:55.584 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000276f 00:16:55.584 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:16:55.584 00:16:55.584 FDP write on placement id: 0 success 00:16:55.584 00:16:55.584 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:16:55.584 00:16:55.584 IO mgmt send: RUH update for Placement ID: #0 Success 00:16:55.584 00:16:55.584 Get Feature: FDP Events for Placement handle: #0 00:16:55.584 ======================== 00:16:55.584 Number of FDP Events: 6 00:16:55.584 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:16:55.584 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:16:55.584 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:16:55.584 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:16:55.584 FDP Event: #4 Type: Media Reallocated Enabled: No 00:16:55.584 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:16:55.584 00:16:55.584 FDP events log page
00:16:55.584 =================== 00:16:55.584 Number of FDP events: 1 00:16:55.584 FDP Event #0: 00:16:55.584 Event Type: RU Not Written to Capacity 00:16:55.584 Placement Identifier: Valid 00:16:55.584 NSID: Valid 00:16:55.584 Location: Valid 00:16:55.584 Placement Identifier: 0 00:16:55.584 Event Timestamp: 7 00:16:55.584 Namespace Identifier: 1 00:16:55.584 Reclaim Group Identifier: 0 00:16:55.584 Reclaim Unit Handle Identifier: 0 00:16:55.584 00:16:55.584 FDP test passed 00:16:55.584 00:16:55.584 real 0m0.279s 00:16:55.584 user 0m0.099s 00:16:55.584 sys 0m0.079s 00:16:55.584 13:11:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.584 13:11:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:16:55.584 ************************************ 00:16:55.584 END TEST nvme_flexible_data_placement 00:16:55.584 ************************************ 00:16:55.584 00:16:55.584 real 0m8.114s 00:16:55.584 user 0m1.451s 00:16:55.584 sys 0m1.671s 00:16:55.584 13:11:01 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.584 13:11:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:55.584 ************************************ 00:16:55.584 END TEST nvme_fdp 00:16:55.584 ************************************ 00:16:55.584 13:11:02 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:16:55.584 13:11:02 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:55.584 13:11:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:55.584 13:11:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.584 13:11:02 -- common/autotest_common.sh@10 -- # set +x 00:16:55.584 ************************************ 00:16:55.584 START TEST nvme_rpc 00:16:55.584 ************************************ 00:16:55.584 13:11:02 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:55.843 * Looking for test storage... 
00:16:55.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.843 13:11:02 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:55.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.843 --rc genhtml_branch_coverage=1 00:16:55.843 --rc genhtml_function_coverage=1 00:16:55.843 --rc genhtml_legend=1 00:16:55.843 --rc geninfo_all_blocks=1 00:16:55.843 --rc geninfo_unexecuted_blocks=1 00:16:55.843 00:16:55.843 ' 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:55.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.843 --rc genhtml_branch_coverage=1 00:16:55.843 --rc genhtml_function_coverage=1 00:16:55.843 --rc genhtml_legend=1 00:16:55.843 --rc geninfo_all_blocks=1 00:16:55.843 --rc geninfo_unexecuted_blocks=1 00:16:55.843 00:16:55.843 ' 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:55.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.843 --rc genhtml_branch_coverage=1 00:16:55.843 --rc genhtml_function_coverage=1 00:16:55.843 --rc genhtml_legend=1 00:16:55.843 --rc geninfo_all_blocks=1 00:16:55.843 --rc geninfo_unexecuted_blocks=1 00:16:55.843 00:16:55.843 ' 00:16:55.843 13:11:02 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:55.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.843 --rc genhtml_branch_coverage=1 00:16:55.843 --rc genhtml_function_coverage=1 00:16:55.843 --rc genhtml_legend=1 00:16:55.843 --rc geninfo_all_blocks=1 00:16:55.843 --rc geninfo_unexecuted_blocks=1 00:16:55.843 00:16:55.843 ' 00:16:55.844 13:11:02 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.844 13:11:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:16:55.844 13:11:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:16:55.844 13:11:02 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67659 00:16:55.844 13:11:02 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:55.844 13:11:02 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:16:55.844 13:11:02 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67659 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67659 ']' 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.844 13:11:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.103 [2024-12-06 13:11:02.409773] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
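Once spdk_tgt is up and waitforlisten returns, the rest of nvme_rpc is plain rpc.py traffic against /var/tmp/spdk.sock: attach the first discovered controller as bdev Nvme0, then confirm that bdev_nvme_apply_firmware on a missing file fails with the JSON-RPC error shown below. Condensed from the trace (the firmware filename is deliberately bogus):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the first controller found above; prints the bdev name (Nvme0n1).
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0

    # Expected to fail: the firmware image does not exist.
    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      echo "apply_firmware failed as expected"
    fi

    $rpc bdev_nvme_detach_controller Nvme0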
00:16:56.103 [2024-12-06 13:11:02.410517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67659 ] 00:16:56.103 [2024-12-06 13:11:02.591896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:56.361 [2024-12-06 13:11:02.697184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.361 [2024-12-06 13:11:02.697198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.298 13:11:03 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.298 13:11:03 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:57.298 13:11:03 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:16:57.556 Nvme0n1 00:16:57.556 13:11:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:16:57.556 13:11:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:16:57.815 request: 00:16:57.815 { 00:16:57.815 "bdev_name": "Nvme0n1", 00:16:57.815 "filename": "non_existing_file", 00:16:57.815 "method": "bdev_nvme_apply_firmware", 00:16:57.815 "req_id": 1 00:16:57.815 } 00:16:57.815 Got JSON-RPC error response 00:16:57.815 response: 00:16:57.815 { 00:16:57.815 "code": -32603, 00:16:57.815 "message": "open file failed." 00:16:57.815 } 00:16:57.815 13:11:04 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:16:57.815 13:11:04 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:16:57.815 13:11:04 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:58.074 13:11:04 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:58.074 13:11:04 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67659 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67659 ']' 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67659 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67659 00:16:58.074 killing process with pid 67659 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67659' 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67659 00:16:58.074 13:11:04 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67659 00:17:00.608 00:17:00.608 real 0m4.460s 00:17:00.608 user 0m8.777s 00:17:00.608 sys 0m0.595s 00:17:00.608 13:11:06 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.608 13:11:06 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.608 ************************************ 00:17:00.608 END TEST nvme_rpc 00:17:00.608 ************************************ 00:17:00.608 13:11:06 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:17:00.608 13:11:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:17:00.608 13:11:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.608 13:11:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.608 ************************************ 00:17:00.608 START TEST nvme_rpc_timeouts 00:17:00.608 ************************************ 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:17:00.608 * Looking for test storage... 00:17:00.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.608 13:11:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:00.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.608 --rc genhtml_branch_coverage=1 00:17:00.608 --rc genhtml_function_coverage=1 00:17:00.608 --rc genhtml_legend=1 00:17:00.608 --rc geninfo_all_blocks=1 00:17:00.608 --rc geninfo_unexecuted_blocks=1 00:17:00.608 00:17:00.608 ' 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:00.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.608 --rc genhtml_branch_coverage=1 00:17:00.608 --rc genhtml_function_coverage=1 00:17:00.608 --rc genhtml_legend=1 00:17:00.608 --rc geninfo_all_blocks=1 00:17:00.608 --rc geninfo_unexecuted_blocks=1 00:17:00.608 00:17:00.608 ' 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:00.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.608 --rc genhtml_branch_coverage=1 00:17:00.608 --rc genhtml_function_coverage=1 00:17:00.608 --rc genhtml_legend=1 00:17:00.608 --rc geninfo_all_blocks=1 00:17:00.608 --rc geninfo_unexecuted_blocks=1 00:17:00.608 00:17:00.608 ' 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:00.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.608 --rc genhtml_branch_coverage=1 00:17:00.608 --rc genhtml_function_coverage=1 00:17:00.608 --rc genhtml_legend=1 00:17:00.608 --rc geninfo_all_blocks=1 00:17:00.608 --rc geninfo_unexecuted_blocks=1 00:17:00.608 00:17:00.608 ' 00:17:00.608 13:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.608 13:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67735 00:17:00.608 13:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67735 00:17:00.608 13:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67773 00:17:00.608 13:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
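The timeouts test that follows is a save/modify/save diff: dump the default bdev_nvme settings with save_config, push new timeout options over RPC, dump again, and compare the two JSON files field by field. The core of that flow, a sketch using the same paths and flags as the run below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    def_cfg=/tmp/settings_default_67735
    mod_cfg=/tmp/settings_modified_67735

    $rpc save_config > "$def_cfg"

    $rpc bdev_nvme_set_options \
      --timeout-us=12000000 \
      --timeout-admin-us=24000000 \
      --action-on-timeout=abort

    $rpc save_config > "$mod_cfg"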
00:17:00.608 13:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:17:00.608 13:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67773 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67773 ']' 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.608 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.609 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.609 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.609 13:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:17:00.609 [2024-12-06 13:11:06.860427] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:17:00.609 [2024-12-06 13:11:06.860611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67773 ] 00:17:00.609 [2024-12-06 13:11:07.042464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:00.868 [2024-12-06 13:11:07.163241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.868 [2024-12-06 13:11:07.163252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.801 13:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.801 Checking default timeout settings: 00:17:01.801 13:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:17:01.801 13:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:17:01.801 13:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:02.058 13:11:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:17:02.058 Making settings changes with rpc: 00:17:02.058 13:11:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:17:02.316 Check default vs. modified settings: 00:17:02.316 13:11:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:17:02.316 13:11:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67735 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67735 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:02.888 Setting action_on_timeout is changed as expected. 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67735 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67735 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:02.888 Setting timeout_us is changed as expected. 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67735 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67735 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:17:02.888 Setting timeout_admin_us is changed as expected. 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67735 /tmp/settings_modified_67735 00:17:02.888 13:11:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67773 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67773 ']' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67773 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67773 00:17:02.888 killing process with pid 67773 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67773' 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67773 00:17:02.888 13:11:09 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67773 00:17:05.418 RPC TIMEOUT SETTING TEST PASSED. 00:17:05.418 13:11:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
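The loop that just completed is the substance of the test: rpc.py save_config snapshots the default bdev settings, bdev_nvme_set_options applies new timeouts, save_config runs again, and each setting is extracted from both snapshots and required to differ. A condensed sketch assembled from the commands visible in the trace (the save_config redirects to the temp files are hidden by xtrace and assumed here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default_67735
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_67735

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # take the second field of the matching JSON line and strip it to
        # alphanumerics, e.g. '"action_on_timeout": "abort",' -> abort
        before=$(grep "$setting" /tmp/settings_default_67735 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67735 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && exit 1    # the value must have changed
        echo "Setting $setting is changed as expected."
    done

The trace confirms the three expected transitions: action_on_timeout none -> abort, timeout_us 0 -> 12000000, timeout_admin_us 0 -> 24000000.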
00:17:05.418 00:17:05.418 real 0m4.774s 00:17:05.418 user 0m9.527s 00:17:05.418 sys 0m0.617s 00:17:05.418 ************************************ 00:17:05.418 END TEST nvme_rpc_timeouts 00:17:05.418 ************************************ 00:17:05.419 13:11:11 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.419 13:11:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:17:05.419 13:11:11 -- spdk/autotest.sh@239 -- # uname -s 00:17:05.419 13:11:11 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:17:05.419 13:11:11 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:17:05.419 13:11:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:05.419 13:11:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.419 13:11:11 -- common/autotest_common.sh@10 -- # set +x 00:17:05.419 ************************************ 00:17:05.419 START TEST sw_hotplug 00:17:05.419 ************************************ 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:17:05.419 * Looking for test storage... 00:17:05.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.419 13:11:11 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:05.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.419 --rc genhtml_branch_coverage=1 00:17:05.419 --rc genhtml_function_coverage=1 00:17:05.419 --rc genhtml_legend=1 00:17:05.419 --rc geninfo_all_blocks=1 00:17:05.419 --rc geninfo_unexecuted_blocks=1 00:17:05.419 00:17:05.419 ' 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:05.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.419 --rc genhtml_branch_coverage=1 00:17:05.419 --rc genhtml_function_coverage=1 00:17:05.419 --rc genhtml_legend=1 00:17:05.419 --rc geninfo_all_blocks=1 00:17:05.419 --rc geninfo_unexecuted_blocks=1 00:17:05.419 00:17:05.419 ' 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:05.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.419 --rc genhtml_branch_coverage=1 00:17:05.419 --rc genhtml_function_coverage=1 00:17:05.419 --rc genhtml_legend=1 00:17:05.419 --rc geninfo_all_blocks=1 00:17:05.419 --rc geninfo_unexecuted_blocks=1 00:17:05.419 00:17:05.419 ' 00:17:05.419 13:11:11 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:05.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.419 --rc genhtml_branch_coverage=1 00:17:05.419 --rc genhtml_function_coverage=1 00:17:05.419 --rc genhtml_legend=1 00:17:05.419 --rc geninfo_all_blocks=1 00:17:05.419 --rc geninfo_unexecuted_blocks=1 00:17:05.419 00:17:05.419 ' 00:17:05.419 13:11:11 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:05.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:05.678 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:05.678 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:05.678 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:05.678 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:05.678 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:17:05.678 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:17:05.678 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
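At sw_hotplug.sh@133 the script collects the NVMe controllers visible to userspace; the scripts/common.sh trace that follows builds a PCI class-code filter (class 01 mass storage, subclass 08 NVM, prog-if 02 NVM Express, i.e. 0108) and runs lspci through it. The enumeration, distilled from those commands:

    # List the PCI addresses (BDFs) of all NVMe controllers: match the
    # class-code field "0108" in machine-readable lspci output.
    lspci -mm -n -D | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # -> 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0 on this VM

The script then keeps only the first nvme_count=2 of the four controllers, and the PCI_ALLOWED line further down restricts setup.sh to 0000:00:10.0 and 0000:00:11.0, which is why 0000:00:12.0 and 0000:00:13.0 are reported as skipped/denied controllers.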
00:17:05.678 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@233 -- # local class 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:17:05.678 13:11:12 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:17:05.678 13:11:12 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:05.678 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:17:05.678 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:17:05.678 13:11:12 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:05.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:06.195 Waiting for block devices as requested 00:17:06.195 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:06.195 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:06.452 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:06.452 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:11.722 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:11.722 13:11:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:17:11.722 13:11:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:11.981 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:17:11.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.981 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:17:12.239 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:17:12.498 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:12.498 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:12.498 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:17:12.498 13:11:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:12.756 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:17:12.756 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:17:12.756 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68642 00:17:12.756 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:17:12.756 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:17:12.757 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:12.757 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:17:12.757 13:11:19 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:12.757 13:11:19 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:12.757 13:11:19 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:12.757 13:11:19 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:12.757 13:11:19 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:17:12.757 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:12.757 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:12.757 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:17:12.757 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:12.757 13:11:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:13.015 Initializing NVMe Controllers 00:17:13.015 Attaching to 0000:00:10.0 00:17:13.015 Attaching to 0000:00:11.0 00:17:13.015 Attached to 0000:00:10.0 00:17:13.015 Attached to 0000:00:11.0 00:17:13.015 Initialization complete. Starting I/O... 
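From here the hotplug example (build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning) owns the two allowed controllers and drives I/O against them, while remove_attach_helper performs hotplug_events=3 surprise-removal cycles with a hotplug_wait of 6 seconds. The bare 'echo 1' and 'echo uio_pci_generic' xtrace lines below are sysfs writes whose redirect targets xtrace does not print; one cycle plausibly looks like the following sketch (the exact sysfs paths are an assumption inferred from the echoed values, not read from the trace):

    # One surprise-removal/reattach cycle over the two test controllers.
    # ASSUMPTION: sysfs node paths are inferred; xtrace hides redirections.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"     # detach ('echo 1' at @40)
    done
    sleep 6                                             # hotplug_wait
    echo 1 > /sys/bus/pci/rescan                        # rediscover ('echo 1' at @56)
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # @59
    done

The nvme_ctrlr_fail "in failed state" ERROR lines that follow are the expected effect of yanking a controller mid-I/O: the driver marks the controller failed and aborts its outstanding commands before the device is reattached.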
00:17:13.015 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:17:13.015 QEMU NVMe Ctrl (12341 ): 1 I/Os completed (+1) 00:17:13.015 00:17:13.953 QEMU NVMe Ctrl (12340 ): 1087 I/Os completed (+1087) 00:17:13.953 QEMU NVMe Ctrl (12341 ): 1196 I/Os completed (+1195) 00:17:13.953 00:17:14.950 QEMU NVMe Ctrl (12340 ): 2330 I/Os completed (+1243) 00:17:14.950 QEMU NVMe Ctrl (12341 ): 2595 I/Os completed (+1399) 00:17:14.950 00:17:15.887 QEMU NVMe Ctrl (12340 ): 4010 I/Os completed (+1680) 00:17:15.887 QEMU NVMe Ctrl (12341 ): 4393 I/Os completed (+1798) 00:17:15.887 00:17:17.264 QEMU NVMe Ctrl (12340 ): 5820 I/Os completed (+1810) 00:17:17.264 QEMU NVMe Ctrl (12341 ): 6417 I/Os completed (+2024) 00:17:17.264 00:17:18.199 QEMU NVMe Ctrl (12340 ): 7622 I/Os completed (+1802) 00:17:18.199 QEMU NVMe Ctrl (12341 ): 8345 I/Os completed (+1928) 00:17:18.199 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:18.764 [2024-12-06 13:11:25.125957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:18.764 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:18.764 [2024-12-06 13:11:25.129016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.129139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.129194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.129246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:18.764 [2024-12-06 13:11:25.132793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.132891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.132925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.132952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:18.764 [2024-12-06 13:11:25.149790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:18.764 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:18.764 [2024-12-06 13:11:25.152052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.152131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.152170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.152198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:18.764 [2024-12-06 13:11:25.155372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.155440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.155473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 [2024-12-06 13:11:25.155497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:18.764 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:19.021 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:19.021 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:19.021 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:19.021 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:19.021 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:19.021 Attaching to 0000:00:10.0 00:17:19.021 Attached to 0000:00:10.0 00:17:19.021 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:17:19.021 00:17:19.021 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:19.021 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:19.021 13:11:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:19.021 Attaching to 0000:00:11.0 00:17:19.021 Attached to 0000:00:11.0 00:17:19.954 QEMU NVMe Ctrl (12340 ): 1578 I/Os completed (+1578) 00:17:19.954 QEMU NVMe Ctrl (12341 ): 1696 I/Os completed (+1696) 00:17:19.954 00:17:20.887 QEMU NVMe Ctrl (12340 ): 3247 I/Os completed (+1669) 00:17:20.887 QEMU NVMe Ctrl (12341 ): 3455 I/Os completed (+1759) 00:17:20.887 00:17:22.274 QEMU NVMe Ctrl (12340 ): 4888 I/Os completed (+1641) 00:17:22.274 QEMU NVMe Ctrl (12341 ): 5247 I/Os completed (+1792) 00:17:22.274 00:17:22.843 QEMU NVMe Ctrl (12340 ): 6548 I/Os completed (+1660) 00:17:22.843 QEMU NVMe Ctrl (12341 ): 7003 I/Os completed (+1756) 00:17:22.843 00:17:24.219 QEMU NVMe Ctrl (12340 ): 8121 I/Os completed (+1573) 00:17:24.220 QEMU NVMe Ctrl (12341 ): 8833 I/Os completed (+1830) 00:17:24.220 00:17:25.156 QEMU NVMe Ctrl (12340 ): 9616 I/Os completed (+1495) 00:17:25.156 QEMU NVMe Ctrl (12341 ): 10524 I/Os completed (+1691) 00:17:25.156 00:17:26.090 QEMU NVMe Ctrl (12340 ): 11256 I/Os completed (+1640) 00:17:26.090 QEMU NVMe Ctrl (12341 ): 12309 I/Os completed (+1785) 00:17:26.090 00:17:27.025 QEMU NVMe Ctrl (12340 ): 12998 I/Os completed (+1742) 00:17:27.025 QEMU NVMe 
Ctrl (12341 ): 14077 I/Os completed (+1768) 00:17:27.025 00:17:27.961 QEMU NVMe Ctrl (12340 ): 14540 I/Os completed (+1542) 00:17:27.961 QEMU NVMe Ctrl (12341 ): 15775 I/Os completed (+1698) 00:17:27.961 00:17:28.895 QEMU NVMe Ctrl (12340 ): 16117 I/Os completed (+1577) 00:17:28.895 QEMU NVMe Ctrl (12341 ): 17505 I/Os completed (+1730) 00:17:28.895 00:17:30.272 QEMU NVMe Ctrl (12340 ): 17636 I/Os completed (+1519) 00:17:30.272 QEMU NVMe Ctrl (12341 ): 19204 I/Os completed (+1699) 00:17:30.272 00:17:31.209 QEMU NVMe Ctrl (12340 ): 19141 I/Os completed (+1505) 00:17:31.209 QEMU NVMe Ctrl (12341 ): 20924 I/Os completed (+1720) 00:17:31.209 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:31.209 [2024-12-06 13:11:37.455715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:31.209 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:31.209 [2024-12-06 13:11:37.457574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.457643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.457672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.457697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:31.209 [2024-12-06 13:11:37.460559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.460617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.460641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.460662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:31.209 [2024-12-06 13:11:37.481775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:31.209 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:31.209 [2024-12-06 13:11:37.483552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.483638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.483669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.483692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:31.209 [2024-12-06 13:11:37.486317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.486382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.486409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 [2024-12-06 13:11:37.486430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.209 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:31.209 Attaching to 0000:00:10.0 00:17:31.209 Attached to 0000:00:10.0 00:17:31.469 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:31.469 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.469 13:11:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:31.469 Attaching to 0000:00:11.0 00:17:31.469 Attached to 0000:00:11.0 00:17:32.035 QEMU NVMe Ctrl (12340 ): 1055 I/Os completed (+1055) 00:17:32.035 QEMU NVMe Ctrl (12341 ): 1080 I/Os completed (+1080) 00:17:32.035 00:17:32.981 QEMU NVMe Ctrl (12340 ): 2824 I/Os completed (+1769) 00:17:32.981 QEMU NVMe Ctrl (12341 ): 3046 I/Os completed (+1966) 00:17:32.981 00:17:33.912 QEMU NVMe Ctrl (12340 ): 4566 I/Os completed (+1742) 00:17:33.912 QEMU NVMe Ctrl (12341 ): 4993 I/Os completed (+1947) 00:17:33.912 00:17:34.846 QEMU NVMe Ctrl (12340 ): 6118 I/Os completed (+1552) 00:17:34.846 QEMU NVMe Ctrl (12341 ): 6730 I/Os completed (+1737) 00:17:34.846 00:17:36.218 QEMU NVMe Ctrl (12340 ): 7730 I/Os completed (+1612) 00:17:36.218 QEMU NVMe Ctrl (12341 ): 8465 I/Os completed (+1735) 00:17:36.218 00:17:37.154 QEMU NVMe Ctrl (12340 ): 9315 I/Os completed (+1585) 00:17:37.154 QEMU NVMe Ctrl (12341 ): 10198 I/Os completed (+1733) 00:17:37.154 00:17:38.090 QEMU NVMe Ctrl (12340 ): 10860 I/Os completed (+1545) 00:17:38.090 QEMU NVMe Ctrl (12341 ): 11908 I/Os completed (+1710) 00:17:38.090 00:17:39.025 QEMU NVMe Ctrl (12340 ): 12548 I/Os completed (+1688) 00:17:39.025 QEMU NVMe Ctrl (12341 ): 13658 I/Os completed (+1750) 00:17:39.025 00:17:39.958 
QEMU NVMe Ctrl (12340 ): 14127 I/Os completed (+1579) 00:17:39.958 QEMU NVMe Ctrl (12341 ): 15474 I/Os completed (+1816) 00:17:39.958 00:17:40.898 QEMU NVMe Ctrl (12340 ): 15779 I/Os completed (+1652) 00:17:40.898 QEMU NVMe Ctrl (12341 ): 17293 I/Os completed (+1819) 00:17:40.898 00:17:42.272 QEMU NVMe Ctrl (12340 ): 17397 I/Os completed (+1618) 00:17:42.272 QEMU NVMe Ctrl (12341 ): 19014 I/Os completed (+1721) 00:17:42.272 00:17:42.844 QEMU NVMe Ctrl (12340 ): 18987 I/Os completed (+1590) 00:17:42.844 QEMU NVMe Ctrl (12341 ): 20774 I/Os completed (+1760) 00:17:42.844 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.412 [2024-12-06 13:11:49.764539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:43.412 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:43.412 [2024-12-06 13:11:49.766829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.766924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.766959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.766989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:43.412 [2024-12-06 13:11:49.770534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.770607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.770636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.770661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.412 [2024-12-06 13:11:49.788899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:43.412 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:43.412 [2024-12-06 13:11:49.791021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.791106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.791146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.791175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:43.412 [2024-12-06 13:11:49.794247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.794309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.794343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 [2024-12-06 13:11:49.794366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:43.412 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:43.670 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:43.670 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:43.670 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:43.670 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:43.670 13:11:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:43.670 Attaching to 0000:00:10.0 00:17:43.670 Attached to 0000:00:10.0 00:17:43.670 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:43.670 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:43.670 13:11:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:43.670 Attaching to 0000:00:11.0 00:17:43.670 Attached to 0000:00:11.0 00:17:43.670 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:43.670 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:43.670 [2024-12-06 13:11:50.072043] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:17:55.881 13:12:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:55.881 13:12:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:55.881 13:12:02 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.94 00:17:55.881 13:12:02 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.94 00:17:55.881 13:12:02 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:55.881 13:12:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94 00:17:55.881 13:12:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2 00:17:55.881 remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s)) 13:12:02 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68642 00:18:02.444 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68642) - No such process 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68642 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69180 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:18:02.444 13:12:08 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69180 00:18:02.444 13:12:08 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69180 ']' 00:18:02.444 13:12:08 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.444 13:12:08 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.444 13:12:08 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.444 13:12:08 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.444 13:12:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:02.444 [2024-12-06 13:12:08.196936] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:18:02.444 [2024-12-06 13:12:08.197134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69180 ] 00:18:02.444 [2024-12-06 13:12:08.380637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.444 [2024-12-06 13:12:08.506214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:18:03.011 13:12:09 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.011 13:12:09 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:18:03.011 13:12:09 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:03.011 13:12:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:18:03.011 13:12:09 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:18:03.011 13:12:09 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:03.011 13:12:09 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:03.011 13:12:09 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:18:03.011 13:12:09 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:03.011 13:12:09 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:09.565 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:09.565 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:09.565 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:09.565 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:09.565 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:09.565 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:09.565 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:09.566 13:12:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.566 13:12:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 [2024-12-06 13:12:15.382859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:09.566 [2024-12-06 13:12:15.385584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:09.566 [2024-12-06 13:12:15.385643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.566 [2024-12-06 13:12:15.385669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.566 [2024-12-06 13:12:15.385700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:09.566 [2024-12-06 13:12:15.385716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.566 [2024-12-06 13:12:15.385733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.566 [2024-12-06 13:12:15.385749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:09.566 [2024-12-06 13:12:15.385765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.566 [2024-12-06 13:12:15.385779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.566 [2024-12-06 13:12:15.385800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:09.566 [2024-12-06 13:12:15.385814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.566 [2024-12-06 13:12:15.385830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.566 13:12:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.566 13:12:15 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:09.566 13:12:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.566 13:12:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 13:12:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:09.566 13:12:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:09.566 [2024-12-06 13:12:16.082862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:18:09.566 [2024-12-06 13:12:16.085643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:09.566 [2024-12-06 13:12:16.085693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.566 [2024-12-06 13:12:16.085719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.566 [2024-12-06 13:12:16.085747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:09.566 [2024-12-06 13:12:16.085765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.566 [2024-12-06 13:12:16.085780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.566 [2024-12-06 13:12:16.085797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:09.566 [2024-12-06 13:12:16.085811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.566 [2024-12-06 13:12:16.085827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.566 [2024-12-06 13:12:16.085855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:09.566 [2024-12-06 13:12:16.085876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.566 [2024-12-06 13:12:16.085890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:10.132 13:12:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.132 13:12:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:10.132 13:12:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:10.132 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:10.390 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:10.390 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:10.390 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:10.390 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:10.390 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:10.390 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:10.390 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:10.390 13:12:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:22.639 13:12:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.639 13:12:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:22.639 13:12:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:22.639 [2024-12-06 13:12:28.883047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:18:22.639 [2024-12-06 13:12:28.886264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:22.639 [2024-12-06 13:12:28.886323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.639 [2024-12-06 13:12:28.886346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.639 [2024-12-06 13:12:28.886376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:22.639 [2024-12-06 13:12:28.886392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.639 [2024-12-06 13:12:28.886409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.639 [2024-12-06 13:12:28.886425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:22.639 [2024-12-06 13:12:28.886441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.639 [2024-12-06 13:12:28.886455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.639 [2024-12-06 13:12:28.886471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:22.639 [2024-12-06 13:12:28.886485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.639 [2024-12-06 13:12:28.886501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:22.639 13:12:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:22.639 13:12:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:22.639 13:12:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:22.639 13:12:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:23.221 13:12:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:23.221 13:12:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:23.221 13:12:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:23.221 13:12:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:23.221 13:12:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:23.221 13:12:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:18:23.221 13:12:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.221 13:12:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:23.221 13:12:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.221 13:12:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:23.221 13:12:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:23.221 [2024-12-06 13:12:29.583485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:18:23.221 [2024-12-06 13:12:29.586454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.221 [2024-12-06 13:12:29.586507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.221 [2024-12-06 13:12:29.586535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.221 [2024-12-06 13:12:29.586561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.221 [2024-12-06 13:12:29.586578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.221 [2024-12-06 13:12:29.586592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.221 [2024-12-06 13:12:29.586610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.221 [2024-12-06 13:12:29.586624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.221 [2024-12-06 13:12:29.586640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.221 [2024-12-06 13:12:29.586654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.221 [2024-12-06 13:12:29.586669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.221 [2024-12-06 13:12:29.586684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:23.788 13:12:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.788 13:12:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:23.788 13:12:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- 
# echo uio_pci_generic 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:23.788 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:24.047 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:24.047 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:24.047 13:12:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:36.269 13:12:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.269 13:12:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:36.269 13:12:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:36.269 13:12:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.269 13:12:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:36.269 [2024-12-06 13:12:42.484400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
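The reattach half of the cycle is the echo sequence traced above (sw_hotplug.sh@56-62): one echo 1, then per device the driver name, the BDF twice, and an empty string. xtrace does not show redirection targets, so every sysfs path below is an assumption — this is the standard Linux rescan/driver_override flow that the argument pattern matches, not paths confirmed by the log:

    # Hypothetical mapping of sw_hotplug.sh@56-62; all targets are assumed.
    echo 1 > /sys/bus/pci/rescan                                            # @56
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60 (assumed)
        echo "$dev" > "/sys/bus/pci/drivers/uio_pci_generic/bind" || true   # @61 (assumed; no-op if @60 already bound)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
    done

Whatever the exact targets, the observable effect is what the test checks: roughly twelve seconds later both controllers re-enumerate and their bdevs reappear.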
00:18:36.269 [2024-12-06 13:12:42.487626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:36.269 [2024-12-06 13:12:42.487687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.269 [2024-12-06 13:12:42.487710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.269 [2024-12-06 13:12:42.487742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:36.269 [2024-12-06 13:12:42.487759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.269 [2024-12-06 13:12:42.487785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.269 [2024-12-06 13:12:42.487801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:36.269 [2024-12-06 13:12:42.487821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.269 [2024-12-06 13:12:42.487836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.269 [2024-12-06 13:12:42.487875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:36.269 [2024-12-06 13:12:42.487892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.269 [2024-12-06 13:12:42.487912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.269 13:12:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:36.269 13:12:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:36.527 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:36.527 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:36.527 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:36.527 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:36.527 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:36.527 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:36.527 13:12:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.527 13:12:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:36.527 13:12:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.786 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:36.786 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:36.786 [2024-12-06 13:12:43.184426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
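The ERROR/NOTICE bursts after each removal are the expected teardown path, not failures: once the PCI device vanishes, the driver marks the controller failed (nvme_ctrlr_fail) and aborts its outstanding admin commands. Those are the four Asynchronous Event Requests (admin opcode 0c, cid 190 down to 187) the driver keeps queued on the admin qpair; each completes with ABORTED - BY REQUEST (status 00/07) and is printed by spdk_nvme_print_completion. A quick sanity check over a saved copy of this console output (the file name build.log is hypothetical):

    # Every completion printed during hotplug should be an AER abort; expect 0.
    grep 'spdk_nvme_print_completion' build.log | grep -vc 'ABORTED - BY REQUEST'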
00:18:36.786 [2024-12-06 13:12:43.187434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:36.786 [2024-12-06 13:12:43.187485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.786 [2024-12-06 13:12:43.187516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.786 [2024-12-06 13:12:43.187544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:36.786 [2024-12-06 13:12:43.187566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.786 [2024-12-06 13:12:43.187582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.786 [2024-12-06 13:12:43.187605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:36.786 [2024-12-06 13:12:43.187620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.786 [2024-12-06 13:12:43.187645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.786 [2024-12-06 13:12:43.187661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:36.786 [2024-12-06 13:12:43.187681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.786 [2024-12-06 13:12:43.187696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:37.352 13:12:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.352 13:12:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:37.352 13:12:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:37.352 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:18:37.610 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:37.610 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:37.610 13:12:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:49.817 13:12:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:49.817 13:12:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:49.817 13:12:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:49.817 13:12:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:49.817 13:12:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:49.817 13:12:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:49.817 13:12:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.818 13:12:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:49.818 13:12:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.818 13:12:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.70 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.70 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.70 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.70 2 00:18:49.818 remove_attach_helper took 46.70s to complete (handling 2 nvme drive(s)) 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:18:49.818 13:12:56 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:18:49.818 13:12:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:49.818 13:12:56 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:56.371 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:56.371 13:13:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.371 13:13:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:56.372 [2024-12-06 13:13:02.116662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:56.372 13:13:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.372 [2024-12-06 13:13:02.118640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.372 [2024-12-06 13:13:02.118715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.372 [2024-12-06 13:13:02.118737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.372 [2024-12-06 13:13:02.118772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.372 [2024-12-06 13:13:02.118789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.372 [2024-12-06 13:13:02.118808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.372 [2024-12-06 13:13:02.118841] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.372 [2024-12-06 13:13:02.118878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.372 [2024-12-06 13:13:02.118897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.372 [2024-12-06 13:13:02.118920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.372 [2024-12-06 13:13:02.118936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.372 [2024-12-06 13:13:02.118961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:56.372 [2024-12-06 13:13:02.616706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
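The 46.70 figure reported above comes from timing_cmd, which runs its argument under bash's time builtin with TIMEFORMAT=%2R (real seconds, two decimals), echoes the measurement, and preserves the exit status; the same wrapper has just been re-entered for the run now in progress (debug_remove_attach_helper 3 6 true). A reconstruction of autotest_common.sh@709-722 consistent with the trace — the exec target at @711 and the redirections are simplified assumptions, not the verbatim helper:

    timing_cmd() {
        local cmd_es=0                                         # @709
        [[ -t 0 ]] || exec < /dev/null                         # @711 (exec target assumed)
        local time=0 TIMEFORMAT=%2R                            # @713
        time=$({ time "$@" > /dev/null; } 2>&1) || cmd_es=$?   # @719 (redirections assumed)
        echo "$time"                                           # @720: e.g. 46.70
        return "$cmd_es"                                       # @722
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)    # @21
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"                          # @22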
00:18:56.372 [2024-12-06 13:13:02.618673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.372 [2024-12-06 13:13:02.618734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.372 [2024-12-06 13:13:02.618780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.372 [2024-12-06 13:13:02.618808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.372 [2024-12-06 13:13:02.618829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.372 [2024-12-06 13:13:02.618858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.372 [2024-12-06 13:13:02.618883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.372 [2024-12-06 13:13:02.618899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.372 [2024-12-06 13:13:02.618919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.372 [2024-12-06 13:13:02.618934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:56.372 [2024-12-06 13:13:02.618970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.372 [2024-12-06 13:13:02.618986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:56.372 13:13:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.372 13:13:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:56.372 13:13:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:56.372 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:56.630 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:56.630 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:56.630 13:13:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:08.823 13:13:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:08.823 13:13:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:08.823 13:13:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:08.823 13:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:08.823 13:13:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:08.823 13:13:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:08.823 13:13:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.823 13:13:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:08.823 13:13:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:08.823 13:13:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.823 13:13:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:08.823 [2024-12-06 13:13:15.116824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
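By now the pattern has repeated enough times that the helper's skeleton can be read straight off the @-numbered entries (its locals were traced at @27-30 above). wait_for_bdevs_to_disappear and rescan_and_rebind are hypothetical names standing in for the code already sketched at @50-51 and @56-62; the sysfs remove target and the literal sleep 12 are taken from the trace, not the source:

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=${3:-false}  # @27-29: 3, 6, true here
        local dev bdfs                                                # @30
        sleep "$hotplug_wait"                                         # @36: let hotplug settle
        while ((hotplug_events--)); do                                # @38
            for dev in "${nvmes[@]}"; do                              # @39
                echo 1 > "/sys/bus/pci/devices/$dev/remove"           # @40 (target assumed)
            done
            "$use_bdev" && wait_for_bdevs_to_disappear                # @43, @50-51
            rescan_and_rebind                                         # @56-62
            sleep 12                                                  # @66: literal 12 in this trace
            if "$use_bdev"; then                                      # @68
                bdfs=($(bdev_bdfs))                                   # @70
                [[ ${bdfs[*]} == "${nvmes[*]}" ]]                     # @71: must re-enumerate exactly
            fi
        done
    }

Three iterations of this loop are what produce the repeating remove / abort-burst / rescan / verify sequence through the rest of the run.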
00:19:08.823 [2024-12-06 13:13:15.118742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.823 [2024-12-06 13:13:15.118797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.823 [2024-12-06 13:13:15.118820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.823 [2024-12-06 13:13:15.118863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.823 [2024-12-06 13:13:15.118889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.823 [2024-12-06 13:13:15.118906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.823 [2024-12-06 13:13:15.118924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.823 [2024-12-06 13:13:15.118940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.823 [2024-12-06 13:13:15.118955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.823 [2024-12-06 13:13:15.118971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.823 [2024-12-06 13:13:15.118985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.823 [2024-12-06 13:13:15.119001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.823 13:13:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:19:08.823 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:09.081 [2024-12-06 13:13:15.516852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:19:09.081 [2024-12-06 13:13:15.518783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:09.081 [2024-12-06 13:13:15.518859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.081 [2024-12-06 13:13:15.518889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.081 [2024-12-06 13:13:15.518916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:09.081 [2024-12-06 13:13:15.518937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.081 [2024-12-06 13:13:15.518952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.081 [2024-12-06 13:13:15.518981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:09.081 [2024-12-06 13:13:15.518996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.081 [2024-12-06 13:13:15.519012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.081 [2024-12-06 13:13:15.519027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:09.081 [2024-12-06 13:13:15.519043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.081 [2024-12-06 13:13:15.519057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:09.338 13:13:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.338 13:13:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:09.338 13:13:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:09.338 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:09.597 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:09.597 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:09.597 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:09.597 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:09.597 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:19:09.597 13:13:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:09.597 13:13:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:09.597 13:13:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:21.797 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:21.797 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:21.797 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:21.797 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:21.797 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:21.797 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:21.797 13:13:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.797 13:13:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:21.797 13:13:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.797 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:21.797 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:21.798 13:13:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.798 13:13:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:21.798 [2024-12-06 13:13:28.117052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
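A note on the hard-to-read @71 entries: when the right-hand side of a [[ ... == ... ]] is quoted, bash forces a literal (non-glob) match, and xtrace renders that by backslash-escaping every character — hence \0\0\0\0\:\0\0\:\1\0\.\0 and so on. The check itself is just a whole-array string comparison:

    [[ ${bdfs[*]} == "${nvmes[*]}" ]]   # rediscovered BDFs must equal the original list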
00:19:21.798 [2024-12-06 13:13:28.118931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:21.798 [2024-12-06 13:13:28.118987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.798 [2024-12-06 13:13:28.119009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.798 [2024-12-06 13:13:28.119042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:21.798 [2024-12-06 13:13:28.119058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.798 [2024-12-06 13:13:28.119075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.798 [2024-12-06 13:13:28.119090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:21.798 [2024-12-06 13:13:28.119115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.798 [2024-12-06 13:13:28.119130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.798 [2024-12-06 13:13:28.119147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:21.798 [2024-12-06 13:13:28.119161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.798 [2024-12-06 13:13:28.119177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.798 13:13:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:19:21.798 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:22.056 [2024-12-06 13:13:28.517058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
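Once the third iteration below completes and the 44.99 s timing is reported, the test clears its trap and tears down the SPDK target with killprocess 69180. The autotest_common.sh@954-978 entries there trace roughly the following shape — a reconstruction, with the helper's special-casing of sudo-owned processes elided:

    killprocess() {
        local process_name
        [[ -n ${1:-} ]] || return 1                        # @954: '[' -z 69180 ']'
        kill -0 "$1" || return                             # @958: is the pid alive?
        [[ $(uname) == Linux ]] &&                         # @959
            process_name=$(ps --no-headers -o comm= "$1")  # @960: reactor_0 here
        [[ $process_name == sudo ]] && return 1            # @964 (real helper does more; elided)
        echo "killing process with pid $1"                 # @972
        kill "$1" && wait "$1"                             # @973, @978: SIGTERM, then reap
    }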
00:19:22.056 [2024-12-06 13:13:28.519085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:22.056 [2024-12-06 13:13:28.519135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.056 [2024-12-06 13:13:28.519173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.056 [2024-12-06 13:13:28.519200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:22.056 [2024-12-06 13:13:28.519218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.056 [2024-12-06 13:13:28.519247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.056 [2024-12-06 13:13:28.519266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:22.056 [2024-12-06 13:13:28.519281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.056 [2024-12-06 13:13:28.519298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.056 [2024-12-06 13:13:28.519313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:22.056 [2024-12-06 13:13:28.519332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.056 [2024-12-06 13:13:28.519346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.313 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:22.314 13:13:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.314 13:13:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:22.314 13:13:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:22.314 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:22.572 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:22.572 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:22.572 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:22.572 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:22.572 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:19:22.572 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:22.572 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:22.572 13:13:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:34.802 13:13:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:34.802 13:13:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:34.802 13:13:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:34.802 13:13:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:34.802 13:13:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:34.802 13:13:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:34.802 13:13:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.802 13:13:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.802 13:13:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:34.802 13:13:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.99 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.99 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:19:34.802 13:13:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.99 00:19:34.802 13:13:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.99 2 00:19:34.802 remove_attach_helper took 44.99s to complete (handling 2 nvme drive(s)) 13:13:41 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:19:34.802 13:13:41 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69180 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69180 ']' 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69180 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69180 00:19:34.802 killing process with pid 69180 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69180' 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69180 00:19:34.802 13:13:41 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69180 00:19:36.704 13:13:43 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:37.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:37.529 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.529 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.529 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:37.788 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:37.788 00:19:37.788 real 2m32.787s 00:19:37.788 user 1m52.752s 00:19:37.788 sys 0m19.793s 00:19:37.788 13:13:44 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.788 13:13:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:37.788 ************************************ 00:19:37.788 END TEST sw_hotplug 00:19:37.788 ************************************ 00:19:37.788 13:13:44 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:19:37.788 13:13:44 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:19:37.788 13:13:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:37.788 13:13:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.788 13:13:44 -- common/autotest_common.sh@10 -- # set +x 00:19:37.788 ************************************ 00:19:37.788 START TEST nvme_xnvme 00:19:37.788 ************************************ 00:19:37.788 13:13:44 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:19:37.788 * Looking for test storage... 00:19:37.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:37.788 13:13:44 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:37.788 13:13:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:37.788 13:13:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:38.048 13:13:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.049 13:13:44 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.049 --rc genhtml_branch_coverage=1 00:19:38.049 --rc genhtml_function_coverage=1 00:19:38.049 --rc genhtml_legend=1 00:19:38.049 --rc geninfo_all_blocks=1 00:19:38.049 --rc geninfo_unexecuted_blocks=1 00:19:38.049 00:19:38.049 ' 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.049 --rc genhtml_branch_coverage=1 00:19:38.049 --rc genhtml_function_coverage=1 00:19:38.049 --rc genhtml_legend=1 00:19:38.049 --rc geninfo_all_blocks=1 00:19:38.049 --rc geninfo_unexecuted_blocks=1 00:19:38.049 00:19:38.049 ' 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.049 --rc genhtml_branch_coverage=1 00:19:38.049 --rc genhtml_function_coverage=1 00:19:38.049 --rc genhtml_legend=1 00:19:38.049 --rc geninfo_all_blocks=1 00:19:38.049 --rc geninfo_unexecuted_blocks=1 00:19:38.049 00:19:38.049 ' 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.049 --rc genhtml_branch_coverage=1 00:19:38.049 --rc genhtml_function_coverage=1 00:19:38.049 --rc genhtml_legend=1 00:19:38.049 --rc geninfo_all_blocks=1 00:19:38.049 --rc geninfo_unexecuted_blocks=1 00:19:38.049 00:19:38.049 ' 00:19:38.049 13:13:44 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:19:38.049 13:13:44 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:19:38.049 13:13:44 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:19:38.049 13:13:44 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:19:38.049 13:13:44 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:19:38.050 13:13:44 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:19:38.050 13:13:44 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:19:38.050 13:13:44 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:19:38.050 #define SPDK_CONFIG_H 00:19:38.050 #define SPDK_CONFIG_AIO_FSDEV 1 00:19:38.050 #define SPDK_CONFIG_APPS 1 00:19:38.050 #define SPDK_CONFIG_ARCH native 00:19:38.050 #define SPDK_CONFIG_ASAN 1 00:19:38.050 #undef SPDK_CONFIG_AVAHI 00:19:38.050 #undef SPDK_CONFIG_CET 00:19:38.050 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:19:38.050 #define SPDK_CONFIG_COVERAGE 1 00:19:38.050 #define SPDK_CONFIG_CROSS_PREFIX 00:19:38.050 #undef SPDK_CONFIG_CRYPTO 00:19:38.050 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:38.050 #undef SPDK_CONFIG_CUSTOMOCF 00:19:38.050 #undef SPDK_CONFIG_DAOS 00:19:38.050 #define SPDK_CONFIG_DAOS_DIR 00:19:38.050 #define SPDK_CONFIG_DEBUG 1 00:19:38.050 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:38.050 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:19:38.050 #define SPDK_CONFIG_DPDK_INC_DIR 00:19:38.050 #define SPDK_CONFIG_DPDK_LIB_DIR 00:19:38.050 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:38.050 #undef SPDK_CONFIG_DPDK_UADK 00:19:38.050 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:38.050 #define SPDK_CONFIG_EXAMPLES 1 00:19:38.050 #undef SPDK_CONFIG_FC 00:19:38.050 #define SPDK_CONFIG_FC_PATH 00:19:38.050 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:38.050 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:38.050 #define SPDK_CONFIG_FSDEV 1 00:19:38.050 #undef SPDK_CONFIG_FUSE 00:19:38.050 #undef SPDK_CONFIG_FUZZER 00:19:38.050 #define SPDK_CONFIG_FUZZER_LIB 00:19:38.050 #undef SPDK_CONFIG_GOLANG 00:19:38.050 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:19:38.050 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:19:38.050 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:38.050 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:19:38.050 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:38.050 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:38.050 #undef SPDK_CONFIG_HAVE_LZ4 00:19:38.050 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:19:38.050 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:19:38.050 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:38.050 #define SPDK_CONFIG_IDXD 1 00:19:38.050 #define SPDK_CONFIG_IDXD_KERNEL 1 00:19:38.050 #undef SPDK_CONFIG_IPSEC_MB 00:19:38.050 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:38.050 #define SPDK_CONFIG_ISAL 1 00:19:38.050 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:38.050 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:38.050 #define SPDK_CONFIG_LIBDIR 00:19:38.050 #undef SPDK_CONFIG_LTO 00:19:38.050 #define SPDK_CONFIG_MAX_LCORES 128 00:19:38.050 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:19:38.050 #define SPDK_CONFIG_NVME_CUSE 1 00:19:38.050 #undef SPDK_CONFIG_OCF 00:19:38.050 #define SPDK_CONFIG_OCF_PATH 00:19:38.050 #define SPDK_CONFIG_OPENSSL_PATH 00:19:38.050 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:38.050 #define SPDK_CONFIG_PGO_DIR 00:19:38.050 #undef SPDK_CONFIG_PGO_USE 00:19:38.050 #define SPDK_CONFIG_PREFIX /usr/local 00:19:38.050 #undef SPDK_CONFIG_RAID5F 00:19:38.050 #undef SPDK_CONFIG_RBD 00:19:38.050 #define SPDK_CONFIG_RDMA 1 00:19:38.050 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:38.050 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:38.050 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:38.050 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:38.050 #define SPDK_CONFIG_SHARED 1 00:19:38.050 #undef SPDK_CONFIG_SMA 00:19:38.050 #define SPDK_CONFIG_TESTS 1 00:19:38.050 #undef SPDK_CONFIG_TSAN 00:19:38.050 #define SPDK_CONFIG_UBLK 1 00:19:38.050 #define SPDK_CONFIG_UBSAN 1 00:19:38.050 #undef SPDK_CONFIG_UNIT_TESTS 00:19:38.050 #undef SPDK_CONFIG_URING 00:19:38.050 #define SPDK_CONFIG_URING_PATH 00:19:38.050 #undef SPDK_CONFIG_URING_ZNS 00:19:38.050 #undef SPDK_CONFIG_USDT 00:19:38.050 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:19:38.050 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:38.050 #undef SPDK_CONFIG_VFIO_USER 00:19:38.050 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:38.050 #define SPDK_CONFIG_VHOST 1 00:19:38.050 #define SPDK_CONFIG_VIRTIO 1 00:19:38.050 #undef SPDK_CONFIG_VTUNE 00:19:38.050 #define SPDK_CONFIG_VTUNE_DIR 00:19:38.050 #define SPDK_CONFIG_WERROR 1 00:19:38.050 #define SPDK_CONFIG_WPDK_DIR 00:19:38.050 #define SPDK_CONFIG_XNVME 1 00:19:38.050 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:38.050 13:13:44 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:38.050 13:13:44 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.050 13:13:44 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.050 13:13:44 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.050 13:13:44 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.050 13:13:44 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.050 13:13:44 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.050 13:13:44 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.050 13:13:44 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.050 13:13:44 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:19:38.050 13:13:44 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.050 13:13:44 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@68 -- # uname -s 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:19:38.050 
13:13:44 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:19:38.050 13:13:44 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:19:38.051 13:13:44 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:19:38.051 13:13:44 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:38.051 13:13:44 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:38.052 13:13:44 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
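The trace above shows autotest_common.sh assembling the LeakSanitizer suppression file before any sanitized binary runs. Condensed into standalone shell, the sequence is roughly the following (the paths and the libfuse3 suppression are taken from the trace; the redirection is a simplification of the traced cat/echo pair):

# Rebuild the LSAN suppression file consumed by every ASAN-enabled test run.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # known benign libfuse3 leak
export LSAN_OPTIONS=suppressions=$asan_suppression_file
# Sanitizer knobs exported alongside it, exactly as traced:
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134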
00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70547 ]] 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70547 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2Lxfoy 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.2Lxfoy/tests/xnvme /tmp/spdk.2Lxfoy 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:19:38.052 13:13:44 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975195648 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593083904 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975195648 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593083904 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.052 13:13:44 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:19:38.052 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96453455872 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3249324032 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:19:38.053 * Looking for test storage... 
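set_test_storage above reads `df -T` into the mounts/fss/sizes/avails/uses arrays and, in the lines that follow, settles on the first storage candidate with enough free space. A condensed sketch of that scan, with variable names as traced and GNU df's default 1K-block output assumed:

requested_size=2214592512    # 2 GiB of test data plus slack, as computed above
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))
    avails["$mount"]=$((avail * 1024))   # bytes, matching the values in the trace
done < <(df -T | grep -v Filesystem)
for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space >= requested_size )) && break
done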
00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975195648 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:38.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:38.053 13:13:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:38.311 13:13:44 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:19:38.311 13:13:44 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.311 13:13:44 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:38.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.311 --rc genhtml_branch_coverage=1 00:19:38.311 --rc genhtml_function_coverage=1 00:19:38.311 --rc genhtml_legend=1 00:19:38.311 --rc geninfo_all_blocks=1 00:19:38.311 --rc geninfo_unexecuted_blocks=1 00:19:38.311 00:19:38.311 ' 00:19:38.311 13:13:44 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:38.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.311 --rc genhtml_branch_coverage=1 00:19:38.311 --rc genhtml_function_coverage=1 00:19:38.311 --rc genhtml_legend=1 00:19:38.311 --rc geninfo_all_blocks=1 
00:19:38.311 --rc geninfo_unexecuted_blocks=1 00:19:38.311 00:19:38.311 ' 00:19:38.311 13:13:44 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:38.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.311 --rc genhtml_branch_coverage=1 00:19:38.311 --rc genhtml_function_coverage=1 00:19:38.311 --rc genhtml_legend=1 00:19:38.311 --rc geninfo_all_blocks=1 00:19:38.311 --rc geninfo_unexecuted_blocks=1 00:19:38.311 00:19:38.311 ' 00:19:38.311 13:13:44 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:38.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.311 --rc genhtml_branch_coverage=1 00:19:38.311 --rc genhtml_function_coverage=1 00:19:38.311 --rc genhtml_legend=1 00:19:38.311 --rc geninfo_all_blocks=1 00:19:38.311 --rc geninfo_unexecuted_blocks=1 00:19:38.311 00:19:38.311 ' 00:19:38.311 13:13:44 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.311 13:13:44 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.312 13:13:44 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.312 13:13:44 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.312 13:13:44 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.312 13:13:44 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:19:38.312 13:13:44 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.312 13:13:44 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:19:38.312 13:13:44 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:38.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:38.826 Waiting for block devices as requested 00:19:38.826 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:38.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:39.082 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:39.082 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:44.368 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:44.368 13:13:50 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:19:44.626 13:13:50 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:19:44.626 13:13:50 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:19:44.626 13:13:51 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:19:44.626 13:13:51 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:19:44.626 13:13:51 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:19:44.626 13:13:51 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:19:44.626 13:13:51 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:19:44.884 No valid GPT data, bailing 00:19:44.884 13:13:51 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:44.884 13:13:51 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:19:44.884 13:13:51 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:19:44.884 13:13:51 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:44.885 13:13:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:44.885 13:13:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:44.885 13:13:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.885 13:13:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:44.885 ************************************ 00:19:44.885 START TEST xnvme_rpc 00:19:44.885 ************************************ 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70942 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70942 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70942 ']' 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.885 13:13:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.885 [2024-12-06 13:13:51.298322] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
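The xnvme_rpc test starting here drives a bare spdk_tgt over its default UNIX socket: create an xnvme bdev on /dev/nvme0n1, read each parameter back through framework_get_config, then delete the bdev. Replayed by hand from the spdk repo root, the traced sequence would look roughly like this (jq filters copied from the rpc_xnvme helper traced below):

scripts/rpc.py -s /var/tmp/spdk.sock bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
scripts/rpc.py -s /var/tmp/spdk.sock framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> libaio
scripts/rpc.py -s /var/tmp/spdk.sock bdev_xnvme_delete xnvme_bdev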
00:19:44.885 [2024-12-06 13:13:51.298538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70942 ] 00:19:45.143 [2024-12-06 13:13:51.478451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.143 [2024-12-06 13:13:51.602989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.077 xnvme_bdev 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:46.077 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70942 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70942 ']' 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70942 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70942 00:19:46.336 killing process with pid 70942 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70942' 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70942 00:19:46.336 13:13:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70942 00:19:48.238 00:19:48.238 real 0m3.537s 00:19:48.238 user 0m3.905s 00:19:48.238 sys 0m0.420s 00:19:48.238 13:13:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.238 ************************************ 00:19:48.238 END TEST xnvme_rpc 00:19:48.238 ************************************ 00:19:48.238 13:13:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:48.496 13:13:54 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:48.496 13:13:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:48.496 13:13:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.496 13:13:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:48.496 ************************************ 00:19:48.496 START TEST xnvme_bdevperf 00:19:48.496 ************************************ 00:19:48.496 13:13:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:48.496 13:13:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:48.496 13:13:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:48.496 13:13:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:48.496 13:13:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:48.496 13:13:54 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:48.496 13:13:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:48.496 13:13:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:48.496 { 00:19:48.496 "subsystems": [ 00:19:48.496 { 00:19:48.496 "subsystem": "bdev", 00:19:48.496 "config": [ 00:19:48.496 { 00:19:48.496 "params": { 00:19:48.496 "io_mechanism": "libaio", 00:19:48.497 "conserve_cpu": false, 00:19:48.497 "filename": "/dev/nvme0n1", 00:19:48.497 "name": "xnvme_bdev" 00:19:48.497 }, 00:19:48.497 "method": "bdev_xnvme_create" 00:19:48.497 }, 00:19:48.497 { 00:19:48.497 "method": "bdev_wait_for_examine" 00:19:48.497 } 00:19:48.497 ] 00:19:48.497 } 00:19:48.497 ] 00:19:48.497 } 00:19:48.497 [2024-12-06 13:13:54.867426] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:19:48.497 [2024-12-06 13:13:54.867562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71016 ] 00:19:48.762 [2024-12-06 13:13:55.041297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.762 [2024-12-06 13:13:55.143587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.019 Running I/O for 5 seconds... 00:19:51.338 27528.00 IOPS, 107.53 MiB/s [2024-12-06T13:13:58.488Z] 27266.00 IOPS, 106.51 MiB/s [2024-12-06T13:13:59.864Z] 27232.33 IOPS, 106.38 MiB/s [2024-12-06T13:14:00.801Z] 26925.00 IOPS, 105.18 MiB/s 00:19:54.273 Latency(us) 00:19:54.273 [2024-12-06T13:14:00.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.273 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:54.273 xnvme_bdev : 5.00 27104.00 105.87 0.00 0.00 2356.21 240.17 4736.47 00:19:54.273 [2024-12-06T13:14:00.801Z] =================================================================================================================== 00:19:54.273 [2024-12-06T13:14:00.801Z] Total : 27104.00 105.87 0.00 0.00 2356.21 240.17 4736.47 00:19:55.211 13:14:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:55.211 13:14:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:55.211 13:14:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:55.211 13:14:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:55.211 13:14:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:55.211 { 00:19:55.211 "subsystems": [ 00:19:55.211 { 00:19:55.211 "subsystem": "bdev", 00:19:55.211 "config": [ 00:19:55.211 { 00:19:55.211 "params": { 00:19:55.211 "io_mechanism": "libaio", 00:19:55.211 "conserve_cpu": false, 00:19:55.211 "filename": "/dev/nvme0n1", 00:19:55.211 "name": "xnvme_bdev" 00:19:55.211 }, 00:19:55.211 "method": "bdev_xnvme_create" 00:19:55.211 }, 00:19:55.211 { 00:19:55.211 "method": "bdev_wait_for_examine" 00:19:55.211 } 00:19:55.211 ] 00:19:55.211 } 00:19:55.211 ] 00:19:55.211 } 00:19:55.211 [2024-12-06 13:14:01.563939] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
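For reference, each bdevperf pass above receives its bdev table over /dev/fd/62 via gen_conf and process substitution; the JSON block printed before the EAL banner is the payload carried on that descriptor. A minimal standalone equivalent of the randread pass, built only from values visible in this log (run from the SPDK repo root, assuming bdevperf was built at ./build/examples/bdevperf as in the paths above, and /dev/nvme0n1 is the target disk):

  # JSON config exactly as gen_conf emits it for the libaio / conserve_cpu=false pass
  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio",
     "conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
    {"method":"bdev_wait_for_examine"}]}]}'
  # same flags as the run above: queue depth 64, randread, 5 s, 4 KiB I/O against the xnvme_bdev target
  ./build/examples/bdevperf --json <(echo "$conf") -q 64 -w randread -t 5 -T xnvme_bdev -o 4096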
00:19:55.211 [2024-12-06 13:14:01.564448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71097 ] 00:19:55.470 [2024-12-06 13:14:01.746958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.470 [2024-12-06 13:14:01.849554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.727 Running I/O for 5 seconds... 00:19:58.036 26883.00 IOPS, 105.01 MiB/s [2024-12-06T13:14:05.498Z] 26549.50 IOPS, 103.71 MiB/s [2024-12-06T13:14:06.475Z] 25612.67 IOPS, 100.05 MiB/s [2024-12-06T13:14:07.410Z] 25582.25 IOPS, 99.93 MiB/s [2024-12-06T13:14:07.410Z] 25228.60 IOPS, 98.55 MiB/s 00:20:00.882 Latency(us) 00:20:00.882 [2024-12-06T13:14:07.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.882 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:00.882 xnvme_bdev : 5.01 25196.15 98.42 0.00 0.00 2533.75 614.40 8102.63 00:20:00.882 [2024-12-06T13:14:07.410Z] =================================================================================================================== 00:20:00.882 [2024-12-06T13:14:07.410Z] Total : 25196.15 98.42 0.00 0.00 2533.75 614.40 8102.63 00:20:01.816 00:20:01.816 real 0m13.478s 00:20:01.816 user 0m5.072s 00:20:01.816 sys 0m5.950s 00:20:01.816 13:14:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.816 13:14:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:01.816 ************************************ 00:20:01.816 END TEST xnvme_bdevperf 00:20:01.816 ************************************ 00:20:01.816 13:14:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:01.816 13:14:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:01.816 13:14:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.816 13:14:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:01.816 ************************************ 00:20:01.816 START TEST xnvme_fio_plugin 00:20:01.816 ************************************ 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:01.816 13:14:08 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:01.816 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:02.075 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:02.075 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:02.075 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:02.075 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:02.075 13:14:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:02.075 { 00:20:02.075 "subsystems": [ 00:20:02.075 { 00:20:02.075 "subsystem": "bdev", 00:20:02.075 "config": [ 00:20:02.075 { 00:20:02.075 "params": { 00:20:02.075 "io_mechanism": "libaio", 00:20:02.075 "conserve_cpu": false, 00:20:02.075 "filename": "/dev/nvme0n1", 00:20:02.075 "name": "xnvme_bdev" 00:20:02.075 }, 00:20:02.075 "method": "bdev_xnvme_create" 00:20:02.075 }, 00:20:02.075 { 00:20:02.075 "method": "bdev_wait_for_examine" 00:20:02.075 } 00:20:02.075 ] 00:20:02.075 } 00:20:02.075 ] 00:20:02.075 } 00:20:02.075 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:02.075 fio-3.35 00:20:02.075 Starting 1 thread 00:20:08.693 00:20:08.693 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71216: Fri Dec 6 13:14:14 2024 00:20:08.693 read: IOPS=23.3k, BW=91.2MiB/s (95.6MB/s)(456MiB/5001msec) 00:20:08.693 slat (usec): min=5, max=906, avg=38.57, stdev=28.01 00:20:08.693 clat (usec): min=41, max=6778, avg=1499.61, stdev=828.66 00:20:08.693 lat (usec): min=174, max=6891, avg=1538.17, stdev=831.50 00:20:08.693 clat percentiles (usec): 00:20:08.693 | 1.00th=[ 245], 5.00th=[ 363], 10.00th=[ 490], 20.00th=[ 725], 00:20:08.693 | 30.00th=[ 947], 40.00th=[ 1156], 50.00th=[ 1385], 60.00th=[ 1631], 00:20:08.693 | 70.00th=[ 1909], 80.00th=[ 2245], 90.00th=[ 2638], 95.00th=[ 2933], 00:20:08.693 | 99.00th=[ 3752], 99.50th=[ 4146], 99.90th=[ 4686], 99.95th=[ 4948], 00:20:08.693 | 99.99th=[ 5669] 00:20:08.693 bw ( KiB/s): min=83728, max=113736, 
per=100.00%, avg=94238.78, stdev=10544.04, samples=9 00:20:08.693 iops : min=20932, max=28434, avg=23559.67, stdev=2635.97, samples=9 00:20:08.693 lat (usec) : 50=0.01%, 250=1.12%, 500=9.34%, 750=10.78%, 1000=11.35% 00:20:08.693 lat (msec) : 2=40.01%, 4=26.75%, 10=0.64% 00:20:08.693 cpu : usr=23.28%, sys=54.06%, ctx=94, majf=0, minf=761 00:20:08.693 IO depths : 1=0.1%, 2=1.7%, 4=5.5%, 8=12.1%, 16=25.7%, 32=53.2%, >=64=1.7% 00:20:08.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.693 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:08.693 issued rwts: total=116754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:08.693 00:20:08.693 Run status group 0 (all jobs): 00:20:08.693 READ: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=456MiB (478MB), run=5001-5001msec 00:20:09.260 ----------------------------------------------------- 00:20:09.260 Suppressions used: 00:20:09.260 count bytes template 00:20:09.260 1 11 /usr/src/fio/parse.c 00:20:09.260 1 8 libtcmalloc_minimal.so 00:20:09.260 1 904 libcrypto.so 00:20:09.260 ----------------------------------------------------- 00:20:09.260 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:09.260 
13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:09.260 13:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:09.260 { 00:20:09.260 "subsystems": [ 00:20:09.260 { 00:20:09.260 "subsystem": "bdev", 00:20:09.260 "config": [ 00:20:09.260 { 00:20:09.260 "params": { 00:20:09.260 "io_mechanism": "libaio", 00:20:09.260 "conserve_cpu": false, 00:20:09.260 "filename": "/dev/nvme0n1", 00:20:09.260 "name": "xnvme_bdev" 00:20:09.260 }, 00:20:09.260 "method": "bdev_xnvme_create" 00:20:09.260 }, 00:20:09.260 { 00:20:09.260 "method": "bdev_wait_for_examine" 00:20:09.260 } 00:20:09.260 ] 00:20:09.260 } 00:20:09.260 ] 00:20:09.260 } 00:20:09.519 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:09.519 fio-3.35 00:20:09.519 Starting 1 thread 00:20:16.109 00:20:16.109 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71308: Fri Dec 6 13:14:21 2024 00:20:16.109 write: IOPS=23.8k, BW=92.9MiB/s (97.4MB/s)(464MiB/5001msec); 0 zone resets 00:20:16.109 slat (usec): min=5, max=2402, avg=37.67, stdev=30.13 00:20:16.109 clat (usec): min=99, max=5758, avg=1481.72, stdev=820.24 00:20:16.109 lat (usec): min=105, max=5784, avg=1519.39, stdev=823.16 00:20:16.109 clat percentiles (usec): 00:20:16.109 | 1.00th=[ 251], 5.00th=[ 367], 10.00th=[ 490], 20.00th=[ 717], 00:20:16.109 | 30.00th=[ 930], 40.00th=[ 1139], 50.00th=[ 1352], 60.00th=[ 1598], 00:20:16.109 | 70.00th=[ 1893], 80.00th=[ 2245], 90.00th=[ 2638], 95.00th=[ 2900], 00:20:16.109 | 99.00th=[ 3654], 99.50th=[ 3949], 99.90th=[ 4490], 99.95th=[ 4686], 00:20:16.109 | 99.99th=[ 5211] 00:20:16.109 bw ( KiB/s): min=85944, max=109560, per=99.33%, avg=94456.89, stdev=7680.34, samples=9 00:20:16.109 iops : min=21486, max=27390, avg=23614.22, stdev=1920.09, samples=9 00:20:16.109 lat (usec) : 100=0.01%, 250=0.98%, 500=9.61%, 750=11.04%, 1000=11.95% 00:20:16.109 lat (msec) : 2=39.52%, 4=26.47%, 10=0.44% 00:20:16.109 cpu : usr=23.94%, sys=53.48%, ctx=201, majf=0, minf=765 00:20:16.109 IO depths : 1=0.1%, 2=1.6%, 4=5.4%, 8=11.9%, 16=25.8%, 32=53.5%, >=64=1.7% 00:20:16.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.109 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:16.109 issued rwts: total=0,118892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:16.109 00:20:16.109 Run status group 0 (all jobs): 00:20:16.109 WRITE: bw=92.9MiB/s (97.4MB/s), 92.9MiB/s-92.9MiB/s (97.4MB/s-97.4MB/s), io=464MiB (487MB), run=5001-5001msec 00:20:16.675 ----------------------------------------------------- 00:20:16.675 Suppressions used: 00:20:16.675 count bytes template 00:20:16.675 1 11 /usr/src/fio/parse.c 00:20:16.675 1 8 libtcmalloc_minimal.so 00:20:16.675 1 904 libcrypto.so 00:20:16.675 ----------------------------------------------------- 00:20:16.675 00:20:16.675 00:20:16.675 real 0m14.753s 00:20:16.676 user 0m6.113s 00:20:16.676 sys 0m5.994s 
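The fio passes are wrapped the same way throughout: the harness greps ldd output of the plugin for its ASan runtime, then preloads both the sanitizer and the SPDK bdev engine before invoking fio. With the tracing stripped, the effective command of the run above is (paths exactly as resolved in this log; bdev.json is a stand-in for the JSON config the harness streams over /dev/fd/62):

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev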
00:20:16.676 ************************************ 00:20:16.676 END TEST xnvme_fio_plugin 00:20:16.676 13:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.676 13:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:16.676 ************************************ 00:20:16.676 13:14:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:16.676 13:14:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:16.676 13:14:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:16.676 13:14:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:16.676 13:14:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.676 13:14:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.676 13:14:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:16.676 ************************************ 00:20:16.676 START TEST xnvme_rpc 00:20:16.676 ************************************ 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71400 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71400 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71400 ']' 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.676 13:14:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:16.934 [2024-12-06 13:14:23.239421] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
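The xnvme_rpc pass starting here re-runs the RPC checks with conserve_cpu enabled: cc["true"]=-c makes the create call below pass -c, and the jq probes then expect the stored config to read back true. Reduced to its essential calls (rpc_cmd being the harness's JSON-RPC wrapper around the running spdk_tgt), the test body is:

  rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c => "conserve_cpu": true
  rpc_cmd framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
  rpc_cmd bdev_xnvme_delete xnvme_bdev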
00:20:16.934 [2024-12-06 13:14:23.240520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71400 ] 00:20:16.934 [2024-12-06 13:14:23.425659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.192 [2024-12-06 13:14:23.552613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.127 xnvme_bdev 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.127 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71400 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71400 ']' 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71400 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71400 00:20:18.128 killing process with pid 71400 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71400' 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71400 00:20:18.128 13:14:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71400 00:20:20.663 00:20:20.663 real 0m3.577s 00:20:20.663 user 0m3.920s 00:20:20.663 sys 0m0.441s 00:20:20.663 13:14:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.663 13:14:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:20.663 ************************************ 00:20:20.663 END TEST xnvme_rpc 00:20:20.663 ************************************ 00:20:20.663 13:14:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:20.663 13:14:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:20.663 13:14:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.663 13:14:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:20.663 ************************************ 00:20:20.663 START TEST xnvme_bdevperf 00:20:20.663 ************************************ 00:20:20.663 13:14:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:20.663 13:14:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:20.663 13:14:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:20:20.663 13:14:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:20.663 13:14:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:20.663 13:14:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T 
xnvme_bdev -o 4096 00:20:20.663 13:14:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:20.663 13:14:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:20.664 { 00:20:20.664 "subsystems": [ 00:20:20.664 { 00:20:20.664 "subsystem": "bdev", 00:20:20.664 "config": [ 00:20:20.664 { 00:20:20.664 "params": { 00:20:20.664 "io_mechanism": "libaio", 00:20:20.664 "conserve_cpu": true, 00:20:20.664 "filename": "/dev/nvme0n1", 00:20:20.664 "name": "xnvme_bdev" 00:20:20.664 }, 00:20:20.664 "method": "bdev_xnvme_create" 00:20:20.664 }, 00:20:20.664 { 00:20:20.664 "method": "bdev_wait_for_examine" 00:20:20.664 } 00:20:20.664 ] 00:20:20.664 } 00:20:20.664 ] 00:20:20.664 } 00:20:20.664 [2024-12-06 13:14:26.835029] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:20:20.664 [2024-12-06 13:14:26.835168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71474 ] 00:20:20.664 [2024-12-06 13:14:27.014661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.664 [2024-12-06 13:14:27.140424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.232 Running I/O for 5 seconds... 00:20:23.148 22705.00 IOPS, 88.69 MiB/s [2024-12-06T13:14:30.611Z] 22256.50 IOPS, 86.94 MiB/s [2024-12-06T13:14:31.546Z] 22871.67 IOPS, 89.34 MiB/s [2024-12-06T13:14:32.922Z] 22803.00 IOPS, 89.07 MiB/s 00:20:26.394 Latency(us) 00:20:26.394 [2024-12-06T13:14:32.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.394 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:26.394 xnvme_bdev : 5.00 22633.65 88.41 0.00 0.00 2820.70 240.17 5868.45 00:20:26.394 [2024-12-06T13:14:32.922Z] =================================================================================================================== 00:20:26.394 [2024-12-06T13:14:32.922Z] Total : 22633.65 88.41 0.00 0.00 2820.70 240.17 5868.45 00:20:27.331 13:14:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:27.331 13:14:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:27.331 13:14:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:27.331 13:14:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:27.331 13:14:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:27.331 { 00:20:27.331 "subsystems": [ 00:20:27.331 { 00:20:27.331 "subsystem": "bdev", 00:20:27.331 "config": [ 00:20:27.331 { 00:20:27.331 "params": { 00:20:27.331 "io_mechanism": "libaio", 00:20:27.331 "conserve_cpu": true, 00:20:27.331 "filename": "/dev/nvme0n1", 00:20:27.331 "name": "xnvme_bdev" 00:20:27.331 }, 00:20:27.331 "method": "bdev_xnvme_create" 00:20:27.331 }, 00:20:27.331 { 00:20:27.331 "method": "bdev_wait_for_examine" 00:20:27.331 } 00:20:27.331 ] 00:20:27.331 } 00:20:27.331 ] 00:20:27.331 } 00:20:27.331 [2024-12-06 13:14:33.644346] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
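The MiB/s column in these bdevperf tables follows directly from IOPS at the fixed 4096-byte I/O size, so the totals can be sanity-checked by hand; for the conserve_cpu randread total just above:

  echo 'scale=2; 22633.65 * 4096 / (1024 * 1024)' | bc   # prints 88.41, matching the reported MiB/s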
00:20:27.331 [2024-12-06 13:14:33.644512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71555 ] 00:20:27.331 [2024-12-06 13:14:33.828447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.589 [2024-12-06 13:14:33.932067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.848 Running I/O for 5 seconds... 00:20:30.155 22304.00 IOPS, 87.12 MiB/s [2024-12-06T13:14:37.615Z] 22569.50 IOPS, 88.16 MiB/s [2024-12-06T13:14:38.279Z] 22232.67 IOPS, 86.85 MiB/s [2024-12-06T13:14:39.653Z] 23212.50 IOPS, 90.67 MiB/s 00:20:33.125 Latency(us) 00:20:33.125 [2024-12-06T13:14:39.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.125 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:33.125 xnvme_bdev : 5.00 23168.20 90.50 0.00 0.00 2754.86 301.61 5838.66 00:20:33.125 [2024-12-06T13:14:39.653Z] =================================================================================================================== 00:20:33.125 [2024-12-06T13:14:39.653Z] Total : 23168.20 90.50 0.00 0.00 2754.86 301.61 5838.66 00:20:34.061 00:20:34.061 real 0m13.630s 00:20:34.061 user 0m5.296s 00:20:34.061 sys 0m5.786s 00:20:34.061 13:14:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.061 13:14:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:34.062 ************************************ 00:20:34.062 END TEST xnvme_bdevperf 00:20:34.062 ************************************ 00:20:34.062 13:14:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:34.062 13:14:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:34.062 13:14:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.062 13:14:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:34.062 ************************************ 00:20:34.062 START TEST xnvme_fio_plugin 00:20:34.062 ************************************ 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.062 13:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:34.062 { 00:20:34.062 "subsystems": [ 00:20:34.062 { 00:20:34.062 "subsystem": "bdev", 00:20:34.062 "config": [ 00:20:34.062 { 00:20:34.062 "params": { 00:20:34.062 "io_mechanism": "libaio", 00:20:34.062 "conserve_cpu": true, 00:20:34.062 "filename": "/dev/nvme0n1", 00:20:34.062 "name": "xnvme_bdev" 00:20:34.062 }, 00:20:34.062 "method": "bdev_xnvme_create" 00:20:34.062 }, 00:20:34.062 { 00:20:34.062 "method": "bdev_wait_for_examine" 00:20:34.062 } 00:20:34.062 ] 00:20:34.062 } 00:20:34.062 ] 00:20:34.062 } 00:20:34.321 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:34.321 fio-3.35 00:20:34.321 Starting 1 thread 00:20:40.915 00:20:40.915 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71680: Fri Dec 6 13:14:46 2024 00:20:40.915 read: IOPS=23.7k, BW=92.6MiB/s (97.1MB/s)(463MiB/5001msec) 00:20:40.915 slat (usec): min=5, max=851, avg=37.88, stdev=27.81 00:20:40.915 clat (usec): min=118, max=5959, avg=1475.31, stdev=812.38 00:20:40.915 lat (usec): min=184, max=6022, avg=1513.19, stdev=815.03 00:20:40.915 clat percentiles (usec): 00:20:40.915 | 1.00th=[ 243], 5.00th=[ 355], 10.00th=[ 478], 20.00th=[ 709], 00:20:40.915 | 30.00th=[ 930], 40.00th=[ 1139], 50.00th=[ 1369], 60.00th=[ 1614], 00:20:40.915 | 70.00th=[ 1893], 80.00th=[ 2212], 90.00th=[ 2606], 95.00th=[ 2868], 00:20:40.915 | 99.00th=[ 3654], 99.50th=[ 3982], 99.90th=[ 4555], 99.95th=[ 4752], 00:20:40.915 | 99.99th=[ 5473] 00:20:40.915 bw ( KiB/s): min=82064, max=122328, per=98.71%, avg=93576.89, 
stdev=12362.57, samples=9 00:20:40.915 iops : min=20516, max=30582, avg=23394.22, stdev=3090.64, samples=9 00:20:40.915 lat (usec) : 250=1.19%, 500=9.88%, 750=10.95%, 1000=11.24% 00:20:40.915 lat (msec) : 2=40.43%, 4=25.83%, 10=0.48% 00:20:40.915 cpu : usr=23.30%, sys=53.36%, ctx=71, majf=0, minf=690 00:20:40.915 IO depths : 1=0.1%, 2=1.7%, 4=5.5%, 8=12.3%, 16=25.8%, 32=52.8%, >=64=1.7% 00:20:40.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.915 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:40.915 issued rwts: total=118520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.915 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:40.915 00:20:40.915 Run status group 0 (all jobs): 00:20:40.915 READ: bw=92.6MiB/s (97.1MB/s), 92.6MiB/s-92.6MiB/s (97.1MB/s-97.1MB/s), io=463MiB (485MB), run=5001-5001msec 00:20:41.483 ----------------------------------------------------- 00:20:41.483 Suppressions used: 00:20:41.483 count bytes template 00:20:41.483 1 11 /usr/src/fio/parse.c 00:20:41.483 1 8 libtcmalloc_minimal.so 00:20:41.483 1 904 libcrypto.so 00:20:41.483 ----------------------------------------------------- 00:20:41.483 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:41.483 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.484 13:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:41.484 { 00:20:41.484 "subsystems": [ 00:20:41.484 { 00:20:41.484 "subsystem": "bdev", 00:20:41.484 "config": [ 00:20:41.484 { 00:20:41.484 "params": { 00:20:41.484 "io_mechanism": "libaio", 00:20:41.484 "conserve_cpu": true, 00:20:41.484 "filename": "/dev/nvme0n1", 00:20:41.484 "name": "xnvme_bdev" 00:20:41.484 }, 00:20:41.484 "method": "bdev_xnvme_create" 00:20:41.484 }, 00:20:41.484 { 00:20:41.484 "method": "bdev_wait_for_examine" 00:20:41.484 } 00:20:41.484 ] 00:20:41.484 } 00:20:41.484 ] 00:20:41.484 } 00:20:41.742 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:41.742 fio-3.35 00:20:41.742 Starting 1 thread 00:20:48.306 00:20:48.306 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71772: Fri Dec 6 13:14:53 2024 00:20:48.306 write: IOPS=23.6k, BW=92.4MiB/s (96.9MB/s)(462MiB/5001msec); 0 zone resets 00:20:48.306 slat (usec): min=5, max=728, avg=37.78, stdev=30.97 00:20:48.306 clat (usec): min=120, max=6846, avg=1499.05, stdev=812.93 00:20:48.306 lat (usec): min=186, max=6945, avg=1536.83, stdev=815.08 00:20:48.306 clat percentiles (usec): 00:20:48.306 | 1.00th=[ 258], 5.00th=[ 379], 10.00th=[ 506], 20.00th=[ 742], 00:20:48.306 | 30.00th=[ 955], 40.00th=[ 1156], 50.00th=[ 1385], 60.00th=[ 1631], 00:20:48.306 | 70.00th=[ 1909], 80.00th=[ 2245], 90.00th=[ 2638], 95.00th=[ 2900], 00:20:48.306 | 99.00th=[ 3621], 99.50th=[ 3916], 99.90th=[ 4555], 99.95th=[ 4883], 00:20:48.306 | 99.99th=[ 5932] 00:20:48.306 bw ( KiB/s): min=89576, max=113192, per=99.89%, avg=94487.11, stdev=7826.05, samples=9 00:20:48.306 iops : min=22394, max=28298, avg=23621.78, stdev=1956.51, samples=9 00:20:48.306 lat (usec) : 250=0.83%, 500=8.95%, 750=10.79%, 1000=11.73% 00:20:48.306 lat (msec) : 2=40.55%, 4=26.74%, 10=0.41% 00:20:48.306 cpu : usr=24.38%, sys=52.66%, ctx=118, majf=0, minf=765 00:20:48.306 IO depths : 1=0.1%, 2=1.5%, 4=5.3%, 8=12.0%, 16=25.7%, 32=53.7%, >=64=1.7% 00:20:48.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.306 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:48.306 issued rwts: total=0,118258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.306 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:48.306 00:20:48.306 Run status group 0 (all jobs): 00:20:48.306 WRITE: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=462MiB (484MB), run=5001-5001msec 00:20:48.871 ----------------------------------------------------- 00:20:48.871 Suppressions used: 00:20:48.871 count bytes template 00:20:48.871 1 11 /usr/src/fio/parse.c 00:20:48.871 1 8 libtcmalloc_minimal.so 00:20:48.871 1 904 libcrypto.so 00:20:48.871 ----------------------------------------------------- 00:20:48.871 00:20:48.871 00:20:48.871 real 0m14.775s 00:20:48.871 user 0m6.164s 00:20:48.871 sys 0m5.935s 00:20:48.871 13:14:55 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.871 ************************************ 00:20:48.871 END TEST xnvme_fio_plugin 00:20:48.871 ************************************ 00:20:48.871 13:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:48.871 13:14:55 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:48.871 13:14:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:48.871 13:14:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.871 13:14:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 ************************************ 00:20:48.871 START TEST xnvme_rpc 00:20:48.871 ************************************ 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71858 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71858 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71858 ']' 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.871 13:14:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.871 [2024-12-06 13:14:55.351217] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
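From here the suite moves on to the io_uring mechanism. The driver at the top of xnvme/xnvme.sh (lines 75-88 in the trace above) walks every io mechanism and, for each, every conserve_cpu setting, re-running the same three tests; schematically:

  for io in "${xnvme_io[@]}"; do                      # libaio done above; io_uring begins here
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
    for cc in "${xnvme_conserve_cpu[@]}"; do          # false, then true
      method_bdev_xnvme_create_0["conserve_cpu"]=$cc
      run_test xnvme_rpc xnvme_rpc
      run_test xnvme_bdevperf xnvme_bdevperf
      run_test xnvme_fio_plugin xnvme_fio_plugin
    done
  done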
00:20:48.871 [2024-12-06 13:14:55.351594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71858 ] 00:20:49.128 [2024-12-06 13:14:55.527103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.128 [2024-12-06 13:14:55.629450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.059 xnvme_bdev 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.059 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71858 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71858 ']' 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71858 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71858 00:20:50.318 killing process with pid 71858 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71858' 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71858 00:20:50.318 13:14:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71858 00:20:52.852 ************************************ 00:20:52.852 END TEST xnvme_rpc 00:20:52.852 ************************************ 00:20:52.852 00:20:52.852 real 0m3.572s 00:20:52.852 user 0m3.918s 00:20:52.852 sys 0m0.427s 00:20:52.852 13:14:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.852 13:14:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.852 13:14:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:52.852 13:14:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.852 13:14:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.852 13:14:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:52.852 ************************************ 00:20:52.852 START TEST xnvme_bdevperf 00:20:52.852 ************************************ 00:20:52.852 13:14:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:52.852 13:14:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:52.852 13:14:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:52.852 13:14:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:52.852 13:14:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:52.852 13:14:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:20:52.852 13:14:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:52.852 13:14:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:52.852 { 00:20:52.852 "subsystems": [ 00:20:52.852 { 00:20:52.852 "subsystem": "bdev", 00:20:52.852 "config": [ 00:20:52.852 { 00:20:52.852 "params": { 00:20:52.852 "io_mechanism": "io_uring", 00:20:52.852 "conserve_cpu": false, 00:20:52.852 "filename": "/dev/nvme0n1", 00:20:52.852 "name": "xnvme_bdev" 00:20:52.852 }, 00:20:52.852 "method": "bdev_xnvme_create" 00:20:52.852 }, 00:20:52.852 { 00:20:52.852 "method": "bdev_wait_for_examine" 00:20:52.852 } 00:20:52.852 ] 00:20:52.852 } 00:20:52.852 ] 00:20:52.852 } 00:20:52.852 [2024-12-06 13:14:58.992149] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:20:52.852 [2024-12-06 13:14:58.992310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71938 ] 00:20:52.852 [2024-12-06 13:14:59.172845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.852 [2024-12-06 13:14:59.280404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.111 Running I/O for 5 seconds... 00:20:55.426 46757.00 IOPS, 182.64 MiB/s [2024-12-06T13:15:02.889Z] 45778.50 IOPS, 178.82 MiB/s [2024-12-06T13:15:03.824Z] 45608.33 IOPS, 178.16 MiB/s [2024-12-06T13:15:04.760Z] 45287.00 IOPS, 176.90 MiB/s 00:20:58.232 Latency(us) 00:20:58.232 [2024-12-06T13:15:04.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.232 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:58.232 xnvme_bdev : 5.00 45797.21 178.90 0.00 0.00 1392.90 476.63 8400.52 00:20:58.232 [2024-12-06T13:15:04.760Z] =================================================================================================================== 00:20:58.232 [2024-12-06T13:15:04.760Z] Total : 45797.21 178.90 0.00 0.00 1392.90 476.63 8400.52 00:20:59.166 13:15:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:59.166 13:15:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:59.166 13:15:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:59.166 13:15:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:59.166 13:15:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:59.425 { 00:20:59.425 "subsystems": [ 00:20:59.425 { 00:20:59.425 "subsystem": "bdev", 00:20:59.425 "config": [ 00:20:59.425 { 00:20:59.425 "params": { 00:20:59.425 "io_mechanism": "io_uring", 00:20:59.425 "conserve_cpu": false, 00:20:59.425 "filename": "/dev/nvme0n1", 00:20:59.425 "name": "xnvme_bdev" 00:20:59.425 }, 00:20:59.425 "method": "bdev_xnvme_create" 00:20:59.425 }, 00:20:59.425 { 00:20:59.425 "method": "bdev_wait_for_examine" 00:20:59.425 } 00:20:59.425 ] 00:20:59.425 } 00:20:59.425 ] 00:20:59.425 } 00:20:59.425 [2024-12-06 13:15:05.733058] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
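A recurring helper worth noting: every spdk_tgt teardown in this log runs through killprocess from common/autotest_common.sh. The branches actually traced in these runs (Linux host, process name reactor_0, not sudo) condense to roughly the following sketch; the non-Linux and sudo paths are omitted here:

  killprocess() {                                # condensed from the xtrace above
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                   # target must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")      # reactor_0 in these runs; sudo would take another path
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # reap, so the caller sees the exit status
  }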
00:20:59.425 [2024-12-06 13:15:05.733216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72014 ] 00:20:59.425 [2024-12-06 13:15:05.906840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.704 [2024-12-06 13:15:06.006605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.987 Running I/O for 5 seconds... 00:21:01.861 43776.00 IOPS, 171.00 MiB/s [2024-12-06T13:15:09.325Z] 43136.00 IOPS, 168.50 MiB/s [2024-12-06T13:15:10.698Z] 43392.00 IOPS, 169.50 MiB/s [2024-12-06T13:15:11.633Z] 43184.00 IOPS, 168.69 MiB/s 00:21:05.105 Latency(us) 00:21:05.105 [2024-12-06T13:15:11.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.105 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:05.105 xnvme_bdev : 5.00 43009.97 168.01 0.00 0.00 1482.80 309.06 24546.21 00:21:05.105 [2024-12-06T13:15:11.633Z] =================================================================================================================== 00:21:05.105 [2024-12-06T13:15:11.633Z] Total : 43009.97 168.01 0.00 0.00 1482.80 309.06 24546.21 00:21:06.038 ************************************ 00:21:06.038 END TEST xnvme_bdevperf 00:21:06.038 ************************************ 00:21:06.038 00:21:06.038 real 0m13.493s 00:21:06.038 user 0m7.107s 00:21:06.038 sys 0m6.168s 00:21:06.038 13:15:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.039 13:15:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:06.039 13:15:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:06.039 13:15:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:06.039 13:15:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.039 13:15:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:06.039 ************************************ 00:21:06.039 START TEST xnvme_fio_plugin 00:21:06.039 ************************************ 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- 
# xtrace_disable 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:06.039 13:15:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:06.039 { 00:21:06.039 "subsystems": [ 00:21:06.039 { 00:21:06.039 "subsystem": "bdev", 00:21:06.039 "config": [ 00:21:06.039 { 00:21:06.039 "params": { 00:21:06.039 "io_mechanism": "io_uring", 00:21:06.039 "conserve_cpu": false, 00:21:06.039 "filename": "/dev/nvme0n1", 00:21:06.039 "name": "xnvme_bdev" 00:21:06.039 }, 00:21:06.039 "method": "bdev_xnvme_create" 00:21:06.039 }, 00:21:06.039 { 00:21:06.039 "method": "bdev_wait_for_examine" 00:21:06.039 } 00:21:06.039 ] 00:21:06.039 } 00:21:06.039 ] 00:21:06.039 } 00:21:06.296 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:06.296 fio-3.35 00:21:06.296 Starting 1 thread 00:21:12.851 00:21:12.851 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72138: Fri Dec 6 13:15:18 2024 00:21:12.851 read: IOPS=45.4k, BW=177MiB/s (186MB/s)(887MiB/5001msec) 00:21:12.851 slat (usec): min=2, max=117, avg= 4.50, stdev= 2.04 00:21:12.851 clat (usec): min=227, max=7400, avg=1230.09, stdev=223.93 00:21:12.851 lat (usec): min=236, max=7406, avg=1234.59, stdev=224.48 00:21:12.851 clat percentiles (usec): 00:21:12.851 | 1.00th=[ 914], 5.00th=[ 979], 10.00th=[ 1020], 20.00th=[ 1074], 00:21:12.851 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1237], 00:21:12.851 | 70.00th=[ 1287], 80.00th=[ 1369], 90.00th=[ 1483], 95.00th=[ 1614], 00:21:12.851 | 99.00th=[ 1844], 99.50th=[ 2024], 99.90th=[ 3392], 99.95th=[ 3785], 00:21:12.851 | 99.99th=[ 5014] 00:21:12.851 bw ( KiB/s): min=160680, max=193536, per=99.89%, avg=181412.44, stdev=9347.12, 
samples=9 00:21:12.851 iops : min=40170, max=48384, avg=45353.11, stdev=2336.78, samples=9 00:21:12.851 lat (usec) : 250=0.01%, 500=0.04%, 750=0.04%, 1000=7.48% 00:21:12.851 lat (msec) : 2=91.92%, 4=0.48%, 10=0.04% 00:21:12.851 cpu : usr=38.06%, sys=60.80%, ctx=26, majf=0, minf=762 00:21:12.851 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.6% 00:21:12.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.851 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:12.851 issued rwts: total=227065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.851 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.851 00:21:12.851 Run status group 0 (all jobs): 00:21:12.851 READ: bw=177MiB/s (186MB/s), 177MiB/s-177MiB/s (186MB/s-186MB/s), io=887MiB (930MB), run=5001-5001msec 00:21:13.437 ----------------------------------------------------- 00:21:13.437 Suppressions used: 00:21:13.437 count bytes template 00:21:13.437 1 11 /usr/src/fio/parse.c 00:21:13.437 1 8 libtcmalloc_minimal.so 00:21:13.437 1 904 libcrypto.so 00:21:13.437 ----------------------------------------------------- 00:21:13.437 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:13.437 13:15:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:13.437 { 00:21:13.437 "subsystems": [ 00:21:13.437 { 00:21:13.437 "subsystem": "bdev", 00:21:13.437 "config": [ 00:21:13.437 { 00:21:13.437 "params": { 00:21:13.437 "io_mechanism": "io_uring", 00:21:13.437 "conserve_cpu": false, 00:21:13.437 "filename": "/dev/nvme0n1", 00:21:13.437 "name": "xnvme_bdev" 00:21:13.437 }, 00:21:13.437 "method": "bdev_xnvme_create" 00:21:13.437 }, 00:21:13.437 { 00:21:13.437 "method": "bdev_wait_for_examine" 00:21:13.437 } 00:21:13.437 ] 00:21:13.437 } 00:21:13.437 ] 00:21:13.437 } 00:21:13.694 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:13.694 fio-3.35 00:21:13.694 Starting 1 thread 00:21:20.291 00:21:20.291 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72230: Fri Dec 6 13:15:25 2024 00:21:20.291 write: IOPS=40.0k, BW=156MiB/s (164MB/s)(782MiB/5002msec); 0 zone resets 00:21:20.291 slat (nsec): min=2894, max=71375, avg=5266.69, stdev=2976.17 00:21:20.291 clat (usec): min=94, max=9532, avg=1392.40, stdev=408.52 00:21:20.291 lat (usec): min=100, max=9537, avg=1397.67, stdev=409.23 00:21:20.291 clat percentiles (usec): 00:21:20.291 | 1.00th=[ 848], 5.00th=[ 1004], 10.00th=[ 1057], 20.00th=[ 1139], 00:21:20.291 | 30.00th=[ 1205], 40.00th=[ 1270], 50.00th=[ 1336], 60.00th=[ 1401], 00:21:20.291 | 70.00th=[ 1483], 80.00th=[ 1582], 90.00th=[ 1745], 95.00th=[ 1893], 00:21:20.291 | 99.00th=[ 2671], 99.50th=[ 3458], 99.90th=[ 6128], 99.95th=[ 7439], 00:21:20.291 | 99.99th=[ 8979] 00:21:20.291 bw ( KiB/s): min=145024, max=196096, per=100.00%, avg=161703.11, stdev=20018.22, samples=9 00:21:20.291 iops : min=36256, max=49024, avg=40425.78, stdev=5004.55, samples=9 00:21:20.291 lat (usec) : 100=0.01%, 250=0.08%, 500=0.25%, 750=0.38%, 1000=3.82% 00:21:20.291 lat (msec) : 2=91.99%, 4=3.12%, 10=0.36% 00:21:20.291 cpu : usr=38.23%, sys=60.45%, ctx=6, majf=0, minf=763 00:21:20.291 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=24.0%, 32=52.0%, >=64=1.8% 00:21:20.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.291 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:20.291 issued rwts: total=0,200261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.291 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.291 00:21:20.291 Run status group 0 (all jobs): 00:21:20.291 WRITE: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=782MiB (820MB), run=5002-5002msec 00:21:20.860 ----------------------------------------------------- 00:21:20.860 Suppressions used: 00:21:20.860 count bytes template 00:21:20.860 1 11 /usr/src/fio/parse.c 00:21:20.860 1 8 libtcmalloc_minimal.so 00:21:20.860 1 904 libcrypto.so 00:21:20.860 ----------------------------------------------------- 00:21:20.860 00:21:20.860 ************************************ 00:21:20.860 END TEST xnvme_fio_plugin 00:21:20.860 
************************************ 00:21:20.860 00:21:20.860 real 0m14.758s 00:21:20.860 user 0m7.632s 00:21:20.860 sys 0m6.718s 00:21:20.860 13:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.860 13:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:20.860 13:15:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:20.860 13:15:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:21:20.860 13:15:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:21:20.860 13:15:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:20.860 13:15:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:20.860 13:15:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.860 13:15:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:20.860 ************************************ 00:21:20.860 START TEST xnvme_rpc 00:21:20.860 ************************************ 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72322 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72322 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72322 ']' 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.860 13:15:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.860 [2024-12-06 13:15:27.359748] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
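The xnvme_rpc test starting here repeats the earlier RPC sequence with conserve_cpu enabled (the -c flag to bdev_xnvme_create). Outside the harness, the rpc_cmd wrapper maps onto SPDK's rpc.py client; a sketch of the equivalent calls, assuming the default /var/tmp/spdk.sock socket:

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The jq filter is the same one the test's rpc_xnvme helper uses to read back each creation parameter.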
00:21:20.860 [2024-12-06 13:15:27.359974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72322 ] 00:21:21.125 [2024-12-06 13:15:27.529700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.125 [2024-12-06 13:15:27.624478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.060 xnvme_bdev 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:22.060 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:22.061 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72322 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72322 ']' 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72322 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72322 00:21:22.320 killing process with pid 72322 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72322' 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72322 00:21:22.320 13:15:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72322 00:21:24.271 00:21:24.271 real 0m3.498s 00:21:24.271 user 0m3.804s 00:21:24.271 sys 0m0.425s 00:21:24.271 13:15:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.271 13:15:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.271 ************************************ 00:21:24.271 END TEST xnvme_rpc 00:21:24.271 ************************************ 00:21:24.271 13:15:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:24.271 13:15:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:24.271 13:15:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.271 13:15:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:24.538 ************************************ 00:21:24.538 START TEST xnvme_bdevperf 00:21:24.538 ************************************ 00:21:24.538 13:15:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:24.538 13:15:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:24.538 13:15:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:21:24.538 13:15:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:24.538 13:15:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:24.538 13:15:30 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:21:24.538 13:15:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:24.538 13:15:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:24.538 { 00:21:24.538 "subsystems": [ 00:21:24.538 { 00:21:24.538 "subsystem": "bdev", 00:21:24.538 "config": [ 00:21:24.538 { 00:21:24.538 "params": { 00:21:24.538 "io_mechanism": "io_uring", 00:21:24.538 "conserve_cpu": true, 00:21:24.538 "filename": "/dev/nvme0n1", 00:21:24.538 "name": "xnvme_bdev" 00:21:24.538 }, 00:21:24.538 "method": "bdev_xnvme_create" 00:21:24.538 }, 00:21:24.538 { 00:21:24.538 "method": "bdev_wait_for_examine" 00:21:24.538 } 00:21:24.538 ] 00:21:24.538 } 00:21:24.538 ] 00:21:24.538 } 00:21:24.538 [2024-12-06 13:15:30.889063] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:21:24.538 [2024-12-06 13:15:30.889245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72396 ] 00:21:24.797 [2024-12-06 13:15:31.075558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.797 [2024-12-06 13:15:31.237302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.055 Running I/O for 5 seconds... 00:21:27.360 55432.00 IOPS, 216.53 MiB/s [2024-12-06T13:15:34.822Z] 54500.00 IOPS, 212.89 MiB/s [2024-12-06T13:15:35.755Z] 54305.67 IOPS, 212.13 MiB/s [2024-12-06T13:15:36.687Z] 53649.25 IOPS, 209.57 MiB/s [2024-12-06T13:15:36.687Z] 53420.80 IOPS, 208.68 MiB/s 00:21:30.159 Latency(us) 00:21:30.159 [2024-12-06T13:15:36.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.159 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:30.159 xnvme_bdev : 5.00 53399.63 208.59 0.00 0.00 1194.59 262.52 4617.31 00:21:30.159 [2024-12-06T13:15:36.687Z] =================================================================================================================== 00:21:30.159 [2024-12-06T13:15:36.687Z] Total : 53399.63 208.59 0.00 0.00 1194.59 262.52 4617.31 00:21:31.096 13:15:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:31.096 13:15:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:31.096 13:15:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:31.096 13:15:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:31.096 13:15:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:31.353 { 00:21:31.353 "subsystems": [ 00:21:31.353 { 00:21:31.353 "subsystem": "bdev", 00:21:31.353 "config": [ 00:21:31.353 { 00:21:31.353 "params": { 00:21:31.353 "io_mechanism": "io_uring", 00:21:31.354 "conserve_cpu": true, 00:21:31.354 "filename": "/dev/nvme0n1", 00:21:31.354 "name": "xnvme_bdev" 00:21:31.354 }, 00:21:31.354 "method": "bdev_xnvme_create" 00:21:31.354 }, 00:21:31.354 { 00:21:31.354 "method": "bdev_wait_for_examine" 00:21:31.354 } 00:21:31.354 ] 00:21:31.354 } 00:21:31.354 ] 00:21:31.354 } 00:21:31.354 [2024-12-06 13:15:37.693058] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
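For reference, the generated configuration printed above is small enough to write by hand; across these permutations only io_mechanism, conserve_cpu, and filename change. A sketch, with the JSON contents taken verbatim from the log output above:

    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

The bdev_wait_for_examine entry keeps the application from starting I/O until bdev examination completes.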
00:21:31.354 [2024-12-06 13:15:37.693246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72482 ] 00:21:31.354 [2024-12-06 13:15:37.879623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.611 [2024-12-06 13:15:37.987689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.868 Running I/O for 5 seconds... 00:21:33.789 38719.00 IOPS, 151.25 MiB/s [2024-12-06T13:15:41.757Z] 39167.50 IOPS, 153.00 MiB/s [2024-12-06T13:15:42.351Z] 38719.33 IOPS, 151.25 MiB/s [2024-12-06T13:15:43.726Z] 38975.50 IOPS, 152.25 MiB/s 00:21:37.198 Latency(us) 00:21:37.198 [2024-12-06T13:15:43.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.198 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:37.198 xnvme_bdev : 5.00 39619.87 154.77 0.00 0.00 1609.68 878.78 5213.09 00:21:37.198 [2024-12-06T13:15:43.726Z] =================================================================================================================== 00:21:37.198 [2024-12-06T13:15:43.726Z] Total : 39619.87 154.77 0.00 0.00 1609.68 878.78 5213.09 00:21:38.133 00:21:38.133 real 0m13.614s 00:21:38.133 user 0m9.047s 00:21:38.133 sys 0m4.020s 00:21:38.133 13:15:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.133 13:15:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:38.133 ************************************ 00:21:38.133 END TEST xnvme_bdevperf 00:21:38.133 ************************************ 00:21:38.133 13:15:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:38.133 13:15:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:38.133 13:15:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.133 13:15:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:38.133 ************************************ 00:21:38.133 START TEST xnvme_fio_plugin 00:21:38.133 ************************************ 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:38.133 13:15:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:38.133 { 00:21:38.133 "subsystems": [ 00:21:38.133 { 00:21:38.133 "subsystem": "bdev", 00:21:38.133 "config": [ 00:21:38.133 { 00:21:38.133 "params": { 00:21:38.133 "io_mechanism": "io_uring", 00:21:38.133 "conserve_cpu": true, 00:21:38.133 "filename": "/dev/nvme0n1", 00:21:38.133 "name": "xnvme_bdev" 00:21:38.133 }, 00:21:38.133 "method": "bdev_xnvme_create" 00:21:38.133 }, 00:21:38.133 { 00:21:38.133 "method": "bdev_wait_for_examine" 00:21:38.133 } 00:21:38.133 ] 00:21:38.133 } 00:21:38.133 ] 00:21:38.133 } 00:21:38.391 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:38.391 fio-3.35 00:21:38.391 Starting 1 thread 00:21:44.953 00:21:44.953 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72597: Fri Dec 6 13:15:50 2024 00:21:44.953 read: IOPS=51.9k, BW=203MiB/s (213MB/s)(1014MiB/5001msec) 00:21:44.953 slat (usec): min=3, max=289, avg= 3.80, stdev= 1.60 00:21:44.953 clat (usec): min=604, max=7845, avg=1080.25, stdev=198.12 00:21:44.953 lat (usec): min=608, max=7853, avg=1084.04, stdev=198.52 00:21:44.953 clat percentiles (usec): 00:21:44.953 | 1.00th=[ 840], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 947], 00:21:44.953 | 30.00th=[ 988], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1090], 00:21:44.953 | 70.00th=[ 1123], 80.00th=[ 1188], 90.00th=[ 1270], 95.00th=[ 1369], 00:21:44.953 | 99.00th=[ 1631], 99.50th=[ 1729], 99.90th=[ 2933], 99.95th=[ 3785], 00:21:44.953 | 99.99th=[ 7767] 00:21:44.953 bw ( KiB/s): min=190464, max=227328, per=100.00%, avg=208768.00, 
stdev=14989.30, samples=9 00:21:44.954 iops : min=47616, max=56832, avg=52192.00, stdev=3747.32, samples=9 00:21:44.954 lat (usec) : 750=0.03%, 1000=33.46% 00:21:44.954 lat (msec) : 2=66.36%, 4=0.13%, 10=0.03% 00:21:44.954 cpu : usr=64.86%, sys=30.90%, ctx=15, majf=0, minf=762 00:21:44.954 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:44.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.954 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:44.954 issued rwts: total=259500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:44.954 00:21:44.954 Run status group 0 (all jobs): 00:21:44.954 READ: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=1014MiB (1063MB), run=5001-5001msec 00:21:45.519 ----------------------------------------------------- 00:21:45.519 Suppressions used: 00:21:45.519 count bytes template 00:21:45.519 1 11 /usr/src/fio/parse.c 00:21:45.519 1 8 libtcmalloc_minimal.so 00:21:45.519 1 904 libcrypto.so 00:21:45.519 ----------------------------------------------------- 00:21:45.519 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:45.519 13:15:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:45.519 { 00:21:45.519 "subsystems": [ 00:21:45.519 { 00:21:45.519 "subsystem": "bdev", 00:21:45.519 "config": [ 00:21:45.519 { 00:21:45.519 "params": { 00:21:45.519 "io_mechanism": "io_uring", 00:21:45.519 "conserve_cpu": true, 00:21:45.519 "filename": "/dev/nvme0n1", 00:21:45.519 "name": "xnvme_bdev" 00:21:45.519 }, 00:21:45.519 "method": "bdev_xnvme_create" 00:21:45.519 }, 00:21:45.519 { 00:21:45.519 "method": "bdev_wait_for_examine" 00:21:45.519 } 00:21:45.519 ] 00:21:45.519 } 00:21:45.519 ] 00:21:45.519 } 00:21:45.832 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:45.832 fio-3.35 00:21:45.832 Starting 1 thread 00:21:52.389 00:21:52.389 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72702: Fri Dec 6 13:15:57 2024 00:21:52.389 write: IOPS=49.8k, BW=195MiB/s (204MB/s)(973MiB/5001msec); 0 zone resets 00:21:52.389 slat (usec): min=3, max=113, avg= 4.08, stdev= 1.71 00:21:52.389 clat (usec): min=769, max=2419, avg=1120.62, stdev=182.81 00:21:52.389 lat (usec): min=773, max=2449, avg=1124.70, stdev=183.42 00:21:52.389 clat percentiles (usec): 00:21:52.389 | 1.00th=[ 857], 5.00th=[ 906], 10.00th=[ 938], 20.00th=[ 979], 00:21:52.389 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1123], 00:21:52.389 | 70.00th=[ 1156], 80.00th=[ 1221], 90.00th=[ 1352], 95.00th=[ 1500], 00:21:52.389 | 99.00th=[ 1762], 99.50th=[ 1860], 99.90th=[ 2073], 99.95th=[ 2180], 00:21:52.389 | 99.99th=[ 2311] 00:21:52.389 bw ( KiB/s): min=187392, max=210432, per=100.00%, avg=200533.33, stdev=7384.17, samples=9 00:21:52.389 iops : min=46848, max=52608, avg=50133.33, stdev=1846.04, samples=9 00:21:52.389 lat (usec) : 1000=24.64% 00:21:52.389 lat (msec) : 2=75.18%, 4=0.18% 00:21:52.389 cpu : usr=68.36%, sys=27.66%, ctx=17, majf=0, minf=763 00:21:52.389 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:52.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.389 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:52.389 issued rwts: total=0,249152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.389 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.389 00:21:52.389 Run status group 0 (all jobs): 00:21:52.389 WRITE: bw=195MiB/s (204MB/s), 195MiB/s-195MiB/s (204MB/s-204MB/s), io=973MiB (1021MB), run=5001-5001msec 00:21:52.648 ----------------------------------------------------- 00:21:52.648 Suppressions used: 00:21:52.648 count bytes template 00:21:52.648 1 11 /usr/src/fio/parse.c 00:21:52.648 1 8 libtcmalloc_minimal.so 00:21:52.648 1 904 libcrypto.so 00:21:52.648 ----------------------------------------------------- 00:21:52.648 00:21:52.648 00:21:52.648 real 0m14.630s 00:21:52.648 user 0m10.378s 00:21:52.648 sys 0m3.560s 00:21:52.648 13:15:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.648 13:15:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:52.648 ************************************ 00:21:52.648 END TEST xnvme_fio_plugin 00:21:52.648 ************************************ 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:21:52.648 13:15:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:52.648 13:15:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:52.648 13:15:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.648 13:15:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:52.648 ************************************ 00:21:52.648 START TEST xnvme_rpc 00:21:52.648 ************************************ 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72784 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72784 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72784 ']' 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.648 13:15:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:52.907 [2024-12-06 13:15:59.235791] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
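From this point the outer loop switches io_mechanism to io_uring_cmd and the target from the block device to the NVMe generic character device, so the create call becomes (a sketch, with conserve_cpu left at its false default):

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd

io_uring_cmd issues NVMe passthrough commands through io_uring against /dev/ng0n1 rather than going through the block layer.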
00:21:52.907 [2024-12-06 13:15:59.235971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72784 ] 00:21:52.907 [2024-12-06 13:15:59.404438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.168 [2024-12-06 13:15:59.507533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.123 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.123 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:54.123 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:21:54.123 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.123 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:54.123 xnvme_bdev 00:21:54.123 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.123 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72784 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72784 ']' 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72784 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72784 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.124 killing process with pid 72784 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72784' 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72784 00:21:54.124 13:16:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72784 00:21:56.692 00:21:56.692 real 0m3.490s 00:21:56.692 user 0m3.812s 00:21:56.692 sys 0m0.430s 00:21:56.692 13:16:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.692 13:16:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:56.692 ************************************ 00:21:56.692 END TEST xnvme_rpc 00:21:56.692 ************************************ 00:21:56.692 13:16:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:56.692 13:16:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:56.692 13:16:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.692 13:16:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:56.692 ************************************ 00:21:56.692 START TEST xnvme_bdevperf 00:21:56.692 ************************************ 00:21:56.692 13:16:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:56.692 13:16:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:56.692 13:16:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:21:56.692 13:16:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:56.692 13:16:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:56.692 13:16:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:21:56.692 13:16:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:56.692 13:16:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:56.692 { 00:21:56.692 "subsystems": [ 00:21:56.692 { 00:21:56.692 "subsystem": "bdev", 00:21:56.692 "config": [ 00:21:56.692 { 00:21:56.692 "params": { 00:21:56.692 "io_mechanism": "io_uring_cmd", 00:21:56.692 "conserve_cpu": false, 00:21:56.692 "filename": "/dev/ng0n1", 00:21:56.692 "name": "xnvme_bdev" 00:21:56.692 }, 00:21:56.692 "method": "bdev_xnvme_create" 00:21:56.692 }, 00:21:56.692 { 00:21:56.692 "method": "bdev_wait_for_examine" 00:21:56.692 } 00:21:56.692 ] 00:21:56.692 } 00:21:56.692 ] 00:21:56.692 } 00:21:56.692 [2024-12-06 13:16:02.767726] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:21:56.692 [2024-12-06 13:16:02.767932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72860 ] 00:21:56.692 [2024-12-06 13:16:02.951596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.692 [2024-12-06 13:16:03.055828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.952 Running I/O for 5 seconds... 00:21:59.259 54460.00 IOPS, 212.73 MiB/s [2024-12-06T13:16:06.721Z] 52280.00 IOPS, 204.22 MiB/s [2024-12-06T13:16:07.656Z] 52045.33 IOPS, 203.30 MiB/s [2024-12-06T13:16:08.591Z] 52410.00 IOPS, 204.73 MiB/s 00:22:02.063 Latency(us) 00:22:02.063 [2024-12-06T13:16:08.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.063 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:02.063 xnvme_bdev : 5.00 52463.69 204.94 0.00 0.00 1216.00 625.57 4974.78 00:22:02.063 [2024-12-06T13:16:08.591Z] =================================================================================================================== 00:22:02.063 [2024-12-06T13:16:08.591Z] Total : 52463.69 204.94 0.00 0.00 1216.00 625.57 4974.78 00:22:02.997 13:16:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:02.998 13:16:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:02.998 13:16:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:22:02.998 13:16:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:02.998 13:16:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:03.257 { 00:22:03.257 "subsystems": [ 00:22:03.257 { 00:22:03.257 "subsystem": "bdev", 00:22:03.257 "config": [ 00:22:03.257 { 00:22:03.257 "params": { 00:22:03.257 "io_mechanism": "io_uring_cmd", 00:22:03.257 "conserve_cpu": false, 00:22:03.257 "filename": "/dev/ng0n1", 00:22:03.257 "name": "xnvme_bdev" 00:22:03.257 }, 00:22:03.257 "method": "bdev_xnvme_create" 00:22:03.257 }, 00:22:03.257 { 00:22:03.257 "method": "bdev_wait_for_examine" 00:22:03.257 } 00:22:03.257 ] 00:22:03.257 } 00:22:03.257 ] 00:22:03.257 } 00:22:03.257 [2024-12-06 13:16:09.562054] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
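The bdevperf passes that follow vary only the -w workload; besides randread and randwrite, the io_uring_cmd permutation also exercises unmap and write_zeroes. A compact way to reproduce all four runs, under the same illustrative bdev.json assumption as above:

    for w in randread randwrite unmap write_zeroes; do
        ./build/examples/bdevperf --json bdev.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done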
00:22:03.257 [2024-12-06 13:16:09.562225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72941 ] 00:22:03.257 [2024-12-06 13:16:09.738565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.515 [2024-12-06 13:16:09.842726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.773 Running I/O for 5 seconds... 00:22:05.638 50816.00 IOPS, 198.50 MiB/s [2024-12-06T13:16:13.546Z] 50048.00 IOPS, 195.50 MiB/s [2024-12-06T13:16:14.482Z] 47338.67 IOPS, 184.92 MiB/s [2024-12-06T13:16:15.417Z] 48400.00 IOPS, 189.06 MiB/s [2024-12-06T13:16:15.417Z] 48499.20 IOPS, 189.45 MiB/s 00:22:08.889 Latency(us) 00:22:08.889 [2024-12-06T13:16:15.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.889 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:22:08.889 xnvme_bdev : 5.00 48484.33 189.39 0.00 0.00 1315.58 789.41 3723.64 00:22:08.889 [2024-12-06T13:16:15.417Z] =================================================================================================================== 00:22:08.889 [2024-12-06T13:16:15.417Z] Total : 48484.33 189.39 0.00 0.00 1315.58 789.41 3723.64 00:22:09.821 13:16:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:09.821 13:16:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:22:09.821 13:16:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:09.821 13:16:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:09.821 13:16:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:09.821 { 00:22:09.821 "subsystems": [ 00:22:09.821 { 00:22:09.821 "subsystem": "bdev", 00:22:09.822 "config": [ 00:22:09.822 { 00:22:09.822 "params": { 00:22:09.822 "io_mechanism": "io_uring_cmd", 00:22:09.822 "conserve_cpu": false, 00:22:09.822 "filename": "/dev/ng0n1", 00:22:09.822 "name": "xnvme_bdev" 00:22:09.822 }, 00:22:09.822 "method": "bdev_xnvme_create" 00:22:09.822 }, 00:22:09.822 { 00:22:09.822 "method": "bdev_wait_for_examine" 00:22:09.822 } 00:22:09.822 ] 00:22:09.822 } 00:22:09.822 ] 00:22:09.822 } 00:22:09.822 [2024-12-06 13:16:16.294830] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:09.822 [2024-12-06 13:16:16.294991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73016 ] 00:22:10.080 [2024-12-06 13:16:16.474891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.080 [2024-12-06 13:16:16.598597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.664 Running I/O for 5 seconds... 
00:22:12.537 70528.00 IOPS, 275.50 MiB/s [2024-12-06T13:16:19.999Z] 68416.00 IOPS, 267.25 MiB/s [2024-12-06T13:16:20.931Z] 69226.67 IOPS, 270.42 MiB/s [2024-12-06T13:16:22.305Z] 68720.00 IOPS, 268.44 MiB/s 00:22:15.777 Latency(us) 00:22:15.777 [2024-12-06T13:16:22.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.777 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:22:15.777 xnvme_bdev : 5.00 68445.78 267.37 0.00 0.00 930.92 539.93 3351.27 00:22:15.777 [2024-12-06T13:16:22.305Z] =================================================================================================================== 00:22:15.777 [2024-12-06T13:16:22.305Z] Total : 68445.78 267.37 0.00 0.00 930.92 539.93 3351.27 00:22:16.712 13:16:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:16.712 13:16:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:22:16.712 13:16:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:16.712 13:16:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:16.712 13:16:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:16.712 { 00:22:16.712 "subsystems": [ 00:22:16.712 { 00:22:16.712 "subsystem": "bdev", 00:22:16.712 "config": [ 00:22:16.712 { 00:22:16.712 "params": { 00:22:16.712 "io_mechanism": "io_uring_cmd", 00:22:16.712 "conserve_cpu": false, 00:22:16.712 "filename": "/dev/ng0n1", 00:22:16.712 "name": "xnvme_bdev" 00:22:16.712 }, 00:22:16.712 "method": "bdev_xnvme_create" 00:22:16.712 }, 00:22:16.712 { 00:22:16.712 "method": "bdev_wait_for_examine" 00:22:16.712 } 00:22:16.712 ] 00:22:16.712 } 00:22:16.712 ] 00:22:16.712 } 00:22:16.712 [2024-12-06 13:16:23.035057] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:16.712 [2024-12-06 13:16:23.035279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73090 ] 00:22:16.712 [2024-12-06 13:16:23.215692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.971 [2024-12-06 13:16:23.319543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.229 Running I/O for 5 seconds... 
00:22:19.109 39233.00 IOPS, 153.25 MiB/s [2024-12-06T13:16:27.009Z] 39366.00 IOPS, 153.77 MiB/s [2024-12-06T13:16:27.942Z] 40032.00 IOPS, 156.38 MiB/s [2024-12-06T13:16:28.876Z] 40258.50 IOPS, 157.26 MiB/s [2024-12-06T13:16:28.876Z] 40917.00 IOPS, 159.83 MiB/s 00:22:22.348 Latency(us) 00:22:22.348 [2024-12-06T13:16:28.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.348 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:22:22.348 xnvme_bdev : 5.00 40902.62 159.78 0.00 0.00 1560.47 91.23 10366.60 00:22:22.348 [2024-12-06T13:16:28.876Z] =================================================================================================================== 00:22:22.348 [2024-12-06T13:16:28.876Z] Total : 40902.62 159.78 0.00 0.00 1560.47 91.23 10366.60 00:22:23.284 00:22:23.284 real 0m27.081s 00:22:23.284 user 0m15.799s 00:22:23.284 sys 0m10.841s 00:22:23.284 13:16:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.284 13:16:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:23.284 ************************************ 00:22:23.284 END TEST xnvme_bdevperf 00:22:23.284 ************************************ 00:22:23.284 13:16:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:22:23.284 13:16:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:23.284 13:16:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.284 13:16:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:23.284 ************************************ 00:22:23.284 START TEST xnvme_fio_plugin 00:22:23.284 ************************************ 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:23.284 13:16:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
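The fio_bdev wrapper being traced around this point resolves the ASan runtime out of the external engine's ldd output and preloads both ahead of fio, then hands fio the same JSON bdev config over an anonymous descriptor. Collapsed to its effect, and keeping the paths this log shows (gen_conf stands in for the traced config generator):

# locate the sanitizer runtime the external ioengine was linked against
asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | awk '/libasan/ {print $3}')
# preload ASan before the plugin, then run the same randread job
LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev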
00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:23.285 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:23.544 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:23.544 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:23.544 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:23.544 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:23.544 13:16:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:23.544 { 00:22:23.544 "subsystems": [ 00:22:23.544 { 00:22:23.544 "subsystem": "bdev", 00:22:23.544 "config": [ 00:22:23.544 { 00:22:23.544 "params": { 00:22:23.544 "io_mechanism": "io_uring_cmd", 00:22:23.544 "conserve_cpu": false, 00:22:23.544 "filename": "/dev/ng0n1", 00:22:23.544 "name": "xnvme_bdev" 00:22:23.544 }, 00:22:23.544 "method": "bdev_xnvme_create" 00:22:23.544 }, 00:22:23.544 { 00:22:23.544 "method": "bdev_wait_for_examine" 00:22:23.544 } 00:22:23.544 ] 00:22:23.544 } 00:22:23.544 ] 00:22:23.544 } 00:22:23.544 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:23.544 fio-3.35 00:22:23.544 Starting 1 thread 00:22:30.137 00:22:30.137 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73213: Fri Dec 6 13:16:35 2024 00:22:30.137 read: IOPS=48.4k, BW=189MiB/s (198MB/s)(946MiB/5001msec) 00:22:30.137 slat (usec): min=3, max=121, avg= 4.00, stdev= 1.60 00:22:30.137 clat (usec): min=761, max=3497, avg=1161.27, stdev=182.46 00:22:30.137 lat (usec): min=765, max=3506, avg=1165.27, stdev=182.87 00:22:30.137 clat percentiles (usec): 00:22:30.137 | 1.00th=[ 873], 5.00th=[ 930], 10.00th=[ 963], 20.00th=[ 1012], 00:22:30.137 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1172], 00:22:30.137 | 70.00th=[ 1221], 80.00th=[ 1303], 90.00th=[ 1418], 95.00th=[ 1500], 00:22:30.137 | 99.00th=[ 1680], 99.50th=[ 1762], 99.90th=[ 1975], 99.95th=[ 2376], 00:22:30.137 | 99.99th=[ 3359] 00:22:30.137 bw ( KiB/s): min=174080, max=209920, per=99.71%, avg=193137.78, stdev=11629.78, samples=9 00:22:30.137 iops : min=43520, max=52480, avg=48284.44, stdev=2907.44, samples=9 00:22:30.137 lat (usec) : 1000=17.70% 00:22:30.137 lat (msec) : 2=82.21%, 4=0.09% 00:22:30.137 cpu : usr=41.68%, sys=57.40%, ctx=11, majf=0, minf=762 00:22:30.137 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:30.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.137 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:22:30.137 issued rwts: total=242176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.137 00:22:30.137 Run status group 0 (all jobs): 00:22:30.137 READ: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=946MiB (992MB), run=5001-5001msec 00:22:30.703 ----------------------------------------------------- 00:22:30.703 Suppressions used: 00:22:30.703 count bytes template 00:22:30.703 1 11 /usr/src/fio/parse.c 00:22:30.703 1 8 libtcmalloc_minimal.so 00:22:30.703 1 904 libcrypto.so 00:22:30.703 ----------------------------------------------------- 00:22:30.703 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:30.703 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:30.704 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:30.704 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:30.704 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:30.704 13:16:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:30.704 { 00:22:30.704 "subsystems": [ 00:22:30.704 { 00:22:30.704 "subsystem": "bdev", 00:22:30.704 "config": [ 00:22:30.704 { 00:22:30.704 "params": { 00:22:30.704 "io_mechanism": "io_uring_cmd", 00:22:30.704 "conserve_cpu": false, 00:22:30.704 "filename": "/dev/ng0n1", 00:22:30.704 "name": "xnvme_bdev" 00:22:30.704 }, 00:22:30.704 "method": "bdev_xnvme_create" 00:22:30.704 }, 00:22:30.704 { 00:22:30.704 "method": "bdev_wait_for_examine" 00:22:30.704 } 00:22:30.704 ] 00:22:30.704 } 00:22:30.704 ] 00:22:30.704 } 00:22:30.962 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:30.962 fio-3.35 00:22:30.962 Starting 1 thread 00:22:37.519 00:22:37.519 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73305: Fri Dec 6 13:16:42 2024 00:22:37.519 write: IOPS=45.6k, BW=178MiB/s (187MB/s)(890MiB/5001msec); 0 zone resets 00:22:37.519 slat (usec): min=3, max=726, avg= 4.59, stdev= 2.78 00:22:37.519 clat (usec): min=736, max=4820, avg=1219.11, stdev=225.54 00:22:37.519 lat (usec): min=739, max=4835, avg=1223.70, stdev=226.36 00:22:37.519 clat percentiles (usec): 00:22:37.519 | 1.00th=[ 857], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1029], 00:22:37.519 | 30.00th=[ 1090], 40.00th=[ 1139], 50.00th=[ 1188], 60.00th=[ 1254], 00:22:37.519 | 70.00th=[ 1303], 80.00th=[ 1385], 90.00th=[ 1500], 95.00th=[ 1598], 00:22:37.519 | 99.00th=[ 1860], 99.50th=[ 1975], 99.90th=[ 2376], 99.95th=[ 3163], 00:22:37.519 | 99.99th=[ 4686] 00:22:37.519 bw ( KiB/s): min=162816, max=199680, per=99.09%, avg=180614.22, stdev=12594.48, samples=9 00:22:37.519 iops : min=40704, max=49920, avg=45153.56, stdev=3148.62, samples=9 00:22:37.519 lat (usec) : 750=0.01%, 1000=14.70% 00:22:37.519 lat (msec) : 2=84.87%, 4=0.40%, 10=0.03% 00:22:37.519 cpu : usr=43.16%, sys=55.70%, ctx=24, majf=0, minf=763 00:22:37.519 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:37.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.519 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:37.519 issued rwts: total=0,227895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.519 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:37.519 00:22:37.519 Run status group 0 (all jobs): 00:22:37.519 WRITE: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=890MiB (933MB), run=5001-5001msec 00:22:37.787 ----------------------------------------------------- 00:22:37.787 Suppressions used: 00:22:37.787 count bytes template 00:22:37.787 1 11 /usr/src/fio/parse.c 00:22:37.787 1 8 libtcmalloc_minimal.so 00:22:37.787 1 904 libcrypto.so 00:22:37.787 ----------------------------------------------------- 00:22:37.787 00:22:37.787 00:22:37.787 real 0m14.503s 00:22:37.787 user 0m7.852s 00:22:37.787 sys 0m6.272s 00:22:37.787 13:16:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.787 ************************************ 00:22:37.787 END TEST xnvme_fio_plugin 00:22:37.787 13:16:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:37.787 ************************************ 00:22:38.046 13:16:44 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:22:38.046 13:16:44 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:22:38.046 13:16:44 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:22:38.046 
13:16:44 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:22:38.046 13:16:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:38.046 13:16:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.046 13:16:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:38.046 ************************************ 00:22:38.046 START TEST xnvme_rpc 00:22:38.046 ************************************ 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73387 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73387 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73387 ']' 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.046 13:16:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:38.046 [2024-12-06 13:16:44.478388] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
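The xnvme_rpc test this spdk_tgt instance is coming up for exercises the same bdev purely over JSON-RPC: create it with -c (conserve_cpu=true), read each registered parameter back out of framework_get_config, and delete it again, as the traces below show. A rough standalone equivalent, assuming the stock scripts/rpc.py client that the harness's rpc_cmd wraps:

scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
scripts/rpc.py bdev_xnvme_delete xnvme_bdev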
00:22:38.046 [2024-12-06 13:16:44.478575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73387 ] 00:22:38.304 [2024-12-06 13:16:44.669618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.304 [2024-12-06 13:16:44.797645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.238 xnvme_bdev 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:22:39.238 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.496 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73387 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73387 ']' 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73387 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73387 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73387' 00:22:39.497 killing process with pid 73387 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73387 00:22:39.497 13:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73387 00:22:42.031 00:22:42.031 real 0m3.731s 00:22:42.031 user 0m4.078s 00:22:42.031 sys 0m0.469s 00:22:42.031 13:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.031 13:16:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 ************************************ 00:22:42.031 END TEST xnvme_rpc 00:22:42.031 ************************************ 00:22:42.031 13:16:48 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:22:42.031 13:16:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:42.031 13:16:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.031 13:16:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 ************************************ 00:22:42.031 START TEST xnvme_bdevperf 00:22:42.031 ************************************ 00:22:42.031 13:16:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:22:42.031 13:16:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:22:42.031 13:16:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:22:42.031 13:16:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:42.031 13:16:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:22:42.031 13:16:48 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:22:42.031 13:16:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:42.031 13:16:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:42.031 { 00:22:42.031 "subsystems": [ 00:22:42.031 { 00:22:42.031 "subsystem": "bdev", 00:22:42.031 "config": [ 00:22:42.031 { 00:22:42.031 "params": { 00:22:42.031 "io_mechanism": "io_uring_cmd", 00:22:42.031 "conserve_cpu": true, 00:22:42.031 "filename": "/dev/ng0n1", 00:22:42.031 "name": "xnvme_bdev" 00:22:42.031 }, 00:22:42.031 "method": "bdev_xnvme_create" 00:22:42.031 }, 00:22:42.031 { 00:22:42.031 "method": "bdev_wait_for_examine" 00:22:42.031 } 00:22:42.031 ] 00:22:42.031 } 00:22:42.031 ] 00:22:42.031 } 00:22:42.031 [2024-12-06 13:16:48.244255] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:42.031 [2024-12-06 13:16:48.244478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73467 ] 00:22:42.031 [2024-12-06 13:16:48.419882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.289 [2024-12-06 13:16:48.609426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.547 Running I/O for 5 seconds... 00:22:44.415 55936.00 IOPS, 218.50 MiB/s [2024-12-06T13:16:52.320Z] 56318.00 IOPS, 219.99 MiB/s [2024-12-06T13:16:53.254Z] 54569.00 IOPS, 213.16 MiB/s [2024-12-06T13:16:54.189Z] 53102.75 IOPS, 207.43 MiB/s 00:22:47.661 Latency(us) 00:22:47.661 [2024-12-06T13:16:54.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.661 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:47.661 xnvme_bdev : 5.00 51972.33 203.02 0.00 0.00 1227.35 804.31 7238.75 00:22:47.661 [2024-12-06T13:16:54.189Z] =================================================================================================================== 00:22:47.661 [2024-12-06T13:16:54.189Z] Total : 51972.33 203.02 0.00 0.00 1227.35 804.31 7238.75 00:22:48.596 13:16:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:48.596 13:16:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:22:48.596 13:16:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:48.596 13:16:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:48.596 13:16:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:48.596 { 00:22:48.596 "subsystems": [ 00:22:48.596 { 00:22:48.596 "subsystem": "bdev", 00:22:48.596 "config": [ 00:22:48.596 { 00:22:48.596 "params": { 00:22:48.596 "io_mechanism": "io_uring_cmd", 00:22:48.596 "conserve_cpu": true, 00:22:48.596 "filename": "/dev/ng0n1", 00:22:48.596 "name": "xnvme_bdev" 00:22:48.596 }, 00:22:48.596 "method": "bdev_xnvme_create" 00:22:48.596 }, 00:22:48.596 { 00:22:48.596 "method": "bdev_wait_for_examine" 00:22:48.596 } 00:22:48.596 ] 00:22:48.596 } 00:22:48.596 ] 00:22:48.596 } 00:22:48.596 [2024-12-06 13:16:55.068411] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:22:48.596 [2024-12-06 13:16:55.068566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73548 ] 00:22:48.853 [2024-12-06 13:16:55.241381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.853 [2024-12-06 13:16:55.344292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.425 Running I/O for 5 seconds... 00:22:51.325 49536.00 IOPS, 193.50 MiB/s [2024-12-06T13:16:58.785Z] 47712.00 IOPS, 186.38 MiB/s [2024-12-06T13:16:59.723Z] 46954.67 IOPS, 183.42 MiB/s [2024-12-06T13:17:01.097Z] 47648.00 IOPS, 186.12 MiB/s 00:22:54.569 Latency(us) 00:22:54.569 [2024-12-06T13:17:01.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.569 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:22:54.569 xnvme_bdev : 5.00 47931.47 187.23 0.00 0.00 1330.73 796.86 4200.26 00:22:54.569 [2024-12-06T13:17:01.097Z] =================================================================================================================== 00:22:54.569 [2024-12-06T13:17:01.097Z] Total : 47931.47 187.23 0.00 0.00 1330.73 796.86 4200.26 00:22:55.135 13:17:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:55.392 13:17:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:55.392 13:17:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:22:55.392 13:17:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:55.392 13:17:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:55.392 { 00:22:55.392 "subsystems": [ 00:22:55.392 { 00:22:55.392 "subsystem": "bdev", 00:22:55.392 "config": [ 00:22:55.392 { 00:22:55.392 "params": { 00:22:55.392 "io_mechanism": "io_uring_cmd", 00:22:55.392 "conserve_cpu": true, 00:22:55.392 "filename": "/dev/ng0n1", 00:22:55.392 "name": "xnvme_bdev" 00:22:55.392 }, 00:22:55.392 "method": "bdev_xnvme_create" 00:22:55.392 }, 00:22:55.392 { 00:22:55.392 "method": "bdev_wait_for_examine" 00:22:55.392 } 00:22:55.392 ] 00:22:55.392 } 00:22:55.392 ] 00:22:55.392 } 00:22:55.392 [2024-12-06 13:17:01.753830] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:55.392 [2024-12-06 13:17:01.753993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73618 ] 00:22:55.651 [2024-12-06 13:17:01.930277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.651 [2024-12-06 13:17:02.037122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.910 Running I/O for 5 seconds... 
00:22:57.867 71680.00 IOPS, 280.00 MiB/s [2024-12-06T13:17:05.770Z] 70944.00 IOPS, 277.12 MiB/s [2024-12-06T13:17:06.703Z] 68778.67 IOPS, 268.67 MiB/s [2024-12-06T13:17:07.638Z] 67072.00 IOPS, 262.00 MiB/s 00:23:01.110 Latency(us) 00:23:01.110 [2024-12-06T13:17:07.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.110 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:23:01.110 xnvme_bdev : 5.00 67360.53 263.13 0.00 0.00 945.86 495.24 3157.64 00:23:01.110 [2024-12-06T13:17:07.638Z] =================================================================================================================== 00:23:01.110 [2024-12-06T13:17:07.638Z] Total : 67360.53 263.13 0.00 0.00 945.86 495.24 3157.64 00:23:02.045 13:17:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:02.045 13:17:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:23:02.045 13:17:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:23:02.045 13:17:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:02.045 13:17:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:02.045 { 00:23:02.045 "subsystems": [ 00:23:02.045 { 00:23:02.045 "subsystem": "bdev", 00:23:02.045 "config": [ 00:23:02.045 { 00:23:02.045 "params": { 00:23:02.045 "io_mechanism": "io_uring_cmd", 00:23:02.045 "conserve_cpu": true, 00:23:02.045 "filename": "/dev/ng0n1", 00:23:02.045 "name": "xnvme_bdev" 00:23:02.045 }, 00:23:02.045 "method": "bdev_xnvme_create" 00:23:02.045 }, 00:23:02.045 { 00:23:02.045 "method": "bdev_wait_for_examine" 00:23:02.045 } 00:23:02.045 ] 00:23:02.045 } 00:23:02.045 ] 00:23:02.045 } 00:23:02.045 [2024-12-06 13:17:08.497159] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:02.045 [2024-12-06 13:17:08.497360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73699 ] 00:23:02.304 [2024-12-06 13:17:08.680653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.304 [2024-12-06 13:17:08.791699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.906 Running I/O for 5 seconds... 
00:23:04.773 34284.00 IOPS, 133.92 MiB/s [2024-12-06T13:17:12.235Z] 35915.50 IOPS, 140.29 MiB/s [2024-12-06T13:17:13.170Z] 36848.67 IOPS, 143.94 MiB/s [2024-12-06T13:17:14.544Z] 37604.50 IOPS, 146.89 MiB/s [2024-12-06T13:17:14.544Z] 37501.20 IOPS, 146.49 MiB/s 00:23:08.016 Latency(us) 00:23:08.016 [2024-12-06T13:17:14.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.016 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:23:08.016 xnvme_bdev : 5.00 37483.33 146.42 0.00 0.00 1702.62 62.84 17039.36 00:23:08.016 [2024-12-06T13:17:14.544Z] =================================================================================================================== 00:23:08.016 [2024-12-06T13:17:14.544Z] Total : 37483.33 146.42 0.00 0.00 1702.62 62.84 17039.36 00:23:08.966 00:23:08.966 real 0m27.062s 00:23:08.966 user 0m20.208s 00:23:08.966 sys 0m5.461s 00:23:08.966 13:17:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.966 13:17:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:08.966 ************************************ 00:23:08.966 END TEST xnvme_bdevperf 00:23:08.966 ************************************ 00:23:08.966 13:17:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:23:08.966 13:17:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:08.966 13:17:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.966 13:17:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:08.966 ************************************ 00:23:08.966 START TEST xnvme_fio_plugin 00:23:08.966 ************************************ 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:08.966 13:17:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:08.966 { 00:23:08.966 "subsystems": [ 00:23:08.966 { 00:23:08.966 "subsystem": "bdev", 00:23:08.966 "config": [ 00:23:08.966 { 00:23:08.966 "params": { 00:23:08.966 "io_mechanism": "io_uring_cmd", 00:23:08.966 "conserve_cpu": true, 00:23:08.966 "filename": "/dev/ng0n1", 00:23:08.966 "name": "xnvme_bdev" 00:23:08.966 }, 00:23:08.966 "method": "bdev_xnvme_create" 00:23:08.966 }, 00:23:08.966 { 00:23:08.966 "method": "bdev_wait_for_examine" 00:23:08.966 } 00:23:08.966 ] 00:23:08.966 } 00:23:08.966 ] 00:23:08.966 } 00:23:08.966 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:23:08.966 fio-3.35 00:23:08.966 Starting 1 thread 00:23:15.534 00:23:15.534 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73812: Fri Dec 6 13:17:21 2024 00:23:15.534 read: IOPS=49.4k, BW=193MiB/s (202MB/s)(966MiB/5002msec) 00:23:15.534 slat (usec): min=2, max=847, avg= 4.12, stdev= 2.38 00:23:15.534 clat (usec): min=753, max=4780, avg=1129.96, stdev=183.09 00:23:15.534 lat (usec): min=756, max=4788, avg=1134.08, stdev=183.55 00:23:15.534 clat percentiles (usec): 00:23:15.534 | 1.00th=[ 848], 5.00th=[ 906], 10.00th=[ 947], 20.00th=[ 996], 00:23:15.534 | 30.00th=[ 1029], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:23:15.534 | 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[ 1336], 95.00th=[ 1450], 00:23:15.534 | 99.00th=[ 1680], 99.50th=[ 1795], 99.90th=[ 2245], 99.95th=[ 3032], 00:23:15.534 | 99.99th=[ 4686] 00:23:15.534 bw ( KiB/s): min=186368, max=228352, per=100.00%, avg=198542.22, stdev=13118.04, samples=9 00:23:15.534 iops : min=46592, max=57088, avg=49635.56, stdev=3279.51, samples=9 00:23:15.534 lat (usec) : 1000=21.61% 00:23:15.534 lat (msec) : 2=78.19%, 4=0.18%, 10=0.03% 00:23:15.534 cpu : usr=75.82%, sys=21.16%, ctx=33, majf=0, minf=762 00:23:15.534 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:23:15.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.534 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:23:15.534 issued rwts: total=247220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:15.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:15.534 00:23:15.534 Run status group 0 (all jobs): 00:23:15.534 READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=966MiB (1013MB), run=5002-5002msec 00:23:16.102 ----------------------------------------------------- 00:23:16.102 Suppressions used: 00:23:16.102 count bytes template 00:23:16.102 1 11 /usr/src/fio/parse.c 00:23:16.102 1 8 libtcmalloc_minimal.so 00:23:16.102 1 904 libcrypto.so 00:23:16.102 ----------------------------------------------------- 00:23:16.102 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:16.102 13:17:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:16.102 { 00:23:16.102 "subsystems": [ 00:23:16.102 { 00:23:16.102 "subsystem": "bdev", 00:23:16.102 "config": [ 00:23:16.102 { 00:23:16.102 "params": { 00:23:16.102 "io_mechanism": "io_uring_cmd", 00:23:16.102 "conserve_cpu": true, 00:23:16.102 "filename": "/dev/ng0n1", 00:23:16.102 "name": "xnvme_bdev" 00:23:16.102 }, 00:23:16.102 "method": "bdev_xnvme_create" 00:23:16.102 }, 00:23:16.102 { 00:23:16.102 "method": "bdev_wait_for_examine" 00:23:16.102 } 00:23:16.102 ] 00:23:16.102 } 00:23:16.102 ] 00:23:16.102 } 00:23:16.361 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:23:16.361 fio-3.35 00:23:16.361 Starting 1 thread 00:23:22.939 00:23:22.939 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73904: Fri Dec 6 13:17:28 2024 00:23:22.939 write: IOPS=46.1k, BW=180MiB/s (189MB/s)(900MiB/5001msec); 0 zone resets 00:23:22.939 slat (usec): min=3, max=689, avg= 4.61, stdev= 5.37 00:23:22.939 clat (usec): min=94, max=9188, avg=1209.75, stdev=326.57 00:23:22.939 lat (usec): min=104, max=9200, avg=1214.36, stdev=327.39 00:23:22.939 clat percentiles (usec): 00:23:22.939 | 1.00th=[ 816], 5.00th=[ 906], 10.00th=[ 963], 20.00th=[ 1029], 00:23:22.939 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1156], 60.00th=[ 1205], 00:23:22.939 | 70.00th=[ 1254], 80.00th=[ 1336], 90.00th=[ 1500], 95.00th=[ 1663], 00:23:22.939 | 99.00th=[ 2212], 99.50th=[ 2900], 99.90th=[ 4555], 99.95th=[ 4948], 00:23:22.939 | 99.99th=[ 9110] 00:23:22.939 bw ( KiB/s): min=154608, max=203264, per=100.00%, avg=184269.00, stdev=18106.37, samples=10 00:23:22.939 iops : min=38652, max=50816, avg=46067.20, stdev=4526.55, samples=10 00:23:22.939 lat (usec) : 100=0.01%, 250=0.01%, 500=0.06%, 750=0.37%, 1000=14.68% 00:23:22.939 lat (msec) : 2=83.21%, 4=1.49%, 10=0.19% 00:23:22.939 cpu : usr=67.68%, sys=27.30%, ctx=64, majf=0, minf=763 00:23:22.939 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.2%, 16=24.6%, 32=50.9%, >=64=1.6% 00:23:22.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.939 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:23:22.939 issued rwts: total=0,230301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.939 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:22.939 00:23:22.939 Run status group 0 (all jobs): 00:23:22.939 WRITE: bw=180MiB/s (189MB/s), 180MiB/s-180MiB/s (189MB/s-189MB/s), io=900MiB (943MB), run=5001-5001msec 00:23:23.197 ----------------------------------------------------- 00:23:23.197 Suppressions used: 00:23:23.197 count bytes template 00:23:23.197 1 11 /usr/src/fio/parse.c 00:23:23.197 1 8 libtcmalloc_minimal.so 00:23:23.197 1 904 libcrypto.so 00:23:23.197 ----------------------------------------------------- 00:23:23.197 00:23:23.197 00:23:23.197 real 0m14.493s 00:23:23.197 user 0m10.762s 00:23:23.197 sys 0m3.041s 00:23:23.197 13:17:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.197 13:17:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:23.197 ************************************ 00:23:23.197 END TEST xnvme_fio_plugin 00:23:23.197 ************************************ 00:23:23.455 13:17:29 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73387 00:23:23.455 13:17:29 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73387 ']' 00:23:23.455 13:17:29 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73387 
00:23:23.455 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73387) - No such process 00:23:23.455 Process with pid 73387 is not found 00:23:23.455 13:17:29 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73387 is not found' 00:23:23.455 00:23:23.455 real 3m45.537s 00:23:23.455 user 2m15.803s 00:23:23.455 sys 1m13.894s 00:23:23.455 13:17:29 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.455 13:17:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:23.455 ************************************ 00:23:23.455 END TEST nvme_xnvme 00:23:23.455 ************************************ 00:23:23.455 13:17:29 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:23:23.455 13:17:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.455 13:17:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.455 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:23.455 ************************************ 00:23:23.455 START TEST blockdev_xnvme 00:23:23.455 ************************************ 00:23:23.455 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:23:23.455 * Looking for test storage... 00:23:23.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:23.455 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.455 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.455 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.713 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.713 13:17:29 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:23:23.713 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.713 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.713 --rc genhtml_branch_coverage=1 00:23:23.713 --rc genhtml_function_coverage=1 00:23:23.713 --rc genhtml_legend=1 00:23:23.713 --rc geninfo_all_blocks=1 00:23:23.713 --rc geninfo_unexecuted_blocks=1 00:23:23.713 00:23:23.713 ' 00:23:23.713 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.713 --rc genhtml_branch_coverage=1 00:23:23.713 --rc genhtml_function_coverage=1 00:23:23.713 --rc genhtml_legend=1 00:23:23.713 --rc geninfo_all_blocks=1 00:23:23.713 --rc geninfo_unexecuted_blocks=1 00:23:23.713 00:23:23.713 ' 00:23:23.713 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.713 --rc genhtml_branch_coverage=1 00:23:23.713 --rc genhtml_function_coverage=1 00:23:23.713 --rc genhtml_legend=1 00:23:23.713 --rc geninfo_all_blocks=1 00:23:23.713 --rc geninfo_unexecuted_blocks=1 00:23:23.713 00:23:23.713 ' 00:23:23.713 13:17:29 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.713 --rc genhtml_branch_coverage=1 00:23:23.713 --rc genhtml_function_coverage=1 00:23:23.713 --rc genhtml_legend=1 00:23:23.713 --rc geninfo_all_blocks=1 00:23:23.713 --rc geninfo_unexecuted_blocks=1 00:23:23.713 00:23:23.713 ' 00:23:23.713 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:23.713 13:17:29 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:23:23.713 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:23:23.713 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:23.713 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:23:23.714 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:23:23.714 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:23:23.714 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:23:23.714 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:23:23.714 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:23:23.714 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:23:23.714 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:23:23.714 13:17:29 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74044 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:23.714 13:17:30 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74044 00:23:23.714 13:17:30 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74044 ']' 00:23:23.714 13:17:30 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.714 13:17:30 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.714 13:17:30 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.714 13:17:30 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.714 13:17:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:23.714 [2024-12-06 13:17:30.156322] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
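The setup_xnvme_conf step traced below walks /dev/nvme*n*, filters out zoned namespaces via each device's queue/zoned sysfs attribute, and queues one bdev_xnvme_create per surviving block device. Condensed (a sketch of the traced shell, not the script verbatim):

io_mechanism=io_uring
nvmes=()
for nvme in /dev/nvme*n*; do
  [[ -b $nvme ]] || continue                        # only block devices qualify
  zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null || echo none)
  [[ $zoned == none ]] || continue                  # zoned namespaces are excluded
  nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
done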
00:23:23.714 [2024-12-06 13:17:30.156484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74044 ] 00:23:23.971 [2024-12-06 13:17:30.329497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.971 [2024-12-06 13:17:30.437391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.902 13:17:31 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.902 13:17:31 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:23:24.902 13:17:31 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:23:24.902 13:17:31 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:23:24.902 13:17:31 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:23:24.902 13:17:31 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:23:24.902 13:17:31 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:25.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:25.726 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:25.726 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:25.727 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:23:25.727 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:23:25.999 13:17:32 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:23:25.999 nvme0n1 00:23:25.999 nvme0n2 00:23:25.999 nvme0n3 00:23:25.999 nvme1n1 00:23:25.999 nvme2n1 00:23:25.999 nvme3n1 00:23:25.999 13:17:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.999 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:26.000 
13:17:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "855ee8ee-7f44-49aa-a39e-d09bf0768d98"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "855ee8ee-7f44-49aa-a39e-d09bf0768d98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "56b43bd2-44d7-4997-8988-1c5215fbc3b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "56b43bd2-44d7-4997-8988-1c5215fbc3b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a79f38db-e9de-402d-9b1b-da9ba00b843d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a79f38db-e9de-402d-9b1b-da9ba00b843d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "63f40e8a-e97e-474f-9515-4bff6cc55952"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "63f40e8a-e97e-474f-9515-4bff6cc55952",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "dee2dd3b-8f0b-4c4f-8751-110258876f19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dee2dd3b-8f0b-4c4f-8751-110258876f19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "45f8ba24-46e4-47e3-803c-e823b315e365"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "45f8ba24-46e4-47e3-803c-e823b315e365",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:23:26.000 13:17:32 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74044 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74044 ']' 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74044 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.000 13:17:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 74044 00:23:26.257 13:17:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.257 13:17:32 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.257 killing process with pid 74044 00:23:26.257 13:17:32 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74044' 00:23:26.257 13:17:32 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74044 00:23:26.257 13:17:32 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74044 00:23:28.156 13:17:34 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:28.156 13:17:34 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:23:28.156 13:17:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:28.156 13:17:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.156 13:17:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:28.156 ************************************ 00:23:28.156 START TEST bdev_hello_world 00:23:28.156 ************************************ 00:23:28.156 13:17:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:23:28.413 [2024-12-06 13:17:34.762221] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:28.413 [2024-12-06 13:17:34.762425] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74334 ] 00:23:28.671 [2024-12-06 13:17:34.942059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.671 [2024-12-06 13:17:35.130304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.237 [2024-12-06 13:17:35.543102] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:29.237 [2024-12-06 13:17:35.543183] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:23:29.237 [2024-12-06 13:17:35.543213] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:29.237 [2024-12-06 13:17:35.545609] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:29.237 [2024-12-06 13:17:35.545932] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:29.237 [2024-12-06 13:17:35.545971] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:29.237 [2024-12-06 13:17:35.546147] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
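The killprocess teardown traced just before this hello_world run first checks that the pid is still alive, confirms via ps that it is an SPDK reactor rather than a sudo wrapper, then sends SIGTERM and reaps the child. A condensed, Linux-only sketch of that flow (the real helper in autotest_common.sh also branches on uname):

    # killprocess-style teardown, mirroring the trace above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0      # already gone, nothing to do
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name != sudo ]]; then         # never SIGTERM a sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2> /dev/null || true         # reap; exit status not checked here
        fi
    }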
00:23:29.237 00:23:29.237 [2024-12-06 13:17:35.546192] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:30.170 00:23:30.170 real 0m1.861s 00:23:30.170 user 0m1.536s 00:23:30.170 sys 0m0.207s 00:23:30.170 13:17:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.170 13:17:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:30.170 ************************************ 00:23:30.170 END TEST bdev_hello_world 00:23:30.170 ************************************ 00:23:30.170 13:17:36 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:23:30.170 13:17:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.170 13:17:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.170 13:17:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:30.170 ************************************ 00:23:30.170 START TEST bdev_bounds 00:23:30.170 ************************************ 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:30.170 Process bdevio pid: 74369 00:23:30.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74369 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74369' 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74369 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74369 ']' 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.170 13:17:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:30.170 [2024-12-06 13:17:36.690000] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
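The bdev_bounds test starting here runs the bdevio app in wait mode (-w) with no reserved memory (-s 0) against the same bdev.json, then triggers the CUnit suites below over RPC with the companion driver script. A sketch of that two-process pattern, using the paths from the trace (the socket-polling step is elided):

    # Launch bdevio idle, wait for its RPC socket, then fire the suites.
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # ... poll /var/tmp/spdk.sock as in waitforlisten above ...
    test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"; wait "$bdevio_pid" 2> /dev/null || true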
00:23:30.170 [2024-12-06 13:17:36.690160] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74369 ] 00:23:30.428 [2024-12-06 13:17:36.864499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.686 [2024-12-06 13:17:36.971371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.686 [2024-12-06 13:17:36.971461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.686 [2024-12-06 13:17:36.971468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.619 13:17:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.619 13:17:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:31.619 13:17:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:31.619 I/O targets: 00:23:31.619 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:31.619 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:31.619 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:31.619 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:23:31.619 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:23:31.619 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:23:31.619 00:23:31.619 00:23:31.619 CUnit - A unit testing framework for C - Version 2.1-3 00:23:31.619 http://cunit.sourceforge.net/ 00:23:31.619 00:23:31.619 00:23:31.619 Suite: bdevio tests on: nvme3n1 00:23:31.619 Test: blockdev write read block ...passed 00:23:31.619 Test: blockdev write zeroes read block ...passed 00:23:31.619 Test: blockdev write zeroes read no split ...passed 00:23:31.619 Test: blockdev write zeroes read split ...passed 00:23:31.619 Test: blockdev write zeroes read split partial ...passed 00:23:31.619 Test: blockdev reset ...passed 00:23:31.619 Test: blockdev write read 8 blocks ...passed 00:23:31.619 Test: blockdev write read size > 128k ...passed 00:23:31.619 Test: blockdev write read invalid size ...passed 00:23:31.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:31.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:31.619 Test: blockdev write read max offset ...passed 00:23:31.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:31.619 Test: blockdev writev readv 8 blocks ...passed 00:23:31.619 Test: blockdev writev readv 30 x 1block ...passed 00:23:31.619 Test: blockdev writev readv block ...passed 00:23:31.619 Test: blockdev writev readv size > 128k ...passed 00:23:31.619 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:31.619 Test: blockdev comparev and writev ...passed 00:23:31.619 Test: blockdev nvme passthru rw ...passed 00:23:31.619 Test: blockdev nvme passthru vendor specific ...passed 00:23:31.619 Test: blockdev nvme admin passthru ...passed 00:23:31.619 Test: blockdev copy ...passed 00:23:31.619 Suite: bdevio tests on: nvme2n1 00:23:31.619 Test: blockdev write read block ...passed 00:23:31.619 Test: blockdev write zeroes read block ...passed 00:23:31.619 Test: blockdev write zeroes read no split ...passed 00:23:31.619 Test: blockdev write zeroes read split ...passed 00:23:31.877 Test: blockdev write zeroes read split partial ...passed 00:23:31.877 Test: blockdev reset ...passed 
00:23:31.877 Test: blockdev write read 8 blocks ...passed 00:23:31.877 Test: blockdev write read size > 128k ...passed 00:23:31.877 Test: blockdev write read invalid size ...passed 00:23:31.877 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:31.877 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:31.877 Test: blockdev write read max offset ...passed 00:23:31.877 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:31.877 Test: blockdev writev readv 8 blocks ...passed 00:23:31.877 Test: blockdev writev readv 30 x 1block ...passed 00:23:31.877 Test: blockdev writev readv block ...passed 00:23:31.877 Test: blockdev writev readv size > 128k ...passed 00:23:31.877 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:31.877 Test: blockdev comparev and writev ...passed 00:23:31.877 Test: blockdev nvme passthru rw ...passed 00:23:31.877 Test: blockdev nvme passthru vendor specific ...passed 00:23:31.877 Test: blockdev nvme admin passthru ...passed 00:23:31.877 Test: blockdev copy ...passed 00:23:31.877 Suite: bdevio tests on: nvme1n1 00:23:31.877 Test: blockdev write read block ...passed 00:23:31.877 Test: blockdev write zeroes read block ...passed 00:23:31.877 Test: blockdev write zeroes read no split ...passed 00:23:31.877 Test: blockdev write zeroes read split ...passed 00:23:31.877 Test: blockdev write zeroes read split partial ...passed 00:23:31.877 Test: blockdev reset ...passed 00:23:31.877 Test: blockdev write read 8 blocks ...passed 00:23:31.877 Test: blockdev write read size > 128k ...passed 00:23:31.877 Test: blockdev write read invalid size ...passed 00:23:31.877 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:31.877 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:31.877 Test: blockdev write read max offset ...passed 00:23:31.877 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:31.877 Test: blockdev writev readv 8 blocks ...passed 00:23:31.877 Test: blockdev writev readv 30 x 1block ...passed 00:23:31.877 Test: blockdev writev readv block ...passed 00:23:31.877 Test: blockdev writev readv size > 128k ...passed 00:23:31.877 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:31.877 Test: blockdev comparev and writev ...passed 00:23:31.877 Test: blockdev nvme passthru rw ...passed 00:23:31.877 Test: blockdev nvme passthru vendor specific ...passed 00:23:31.877 Test: blockdev nvme admin passthru ...passed 00:23:31.877 Test: blockdev copy ...passed 00:23:31.877 Suite: bdevio tests on: nvme0n3 00:23:31.877 Test: blockdev write read block ...passed 00:23:31.877 Test: blockdev write zeroes read block ...passed 00:23:31.877 Test: blockdev write zeroes read no split ...passed 00:23:31.877 Test: blockdev write zeroes read split ...passed 00:23:31.877 Test: blockdev write zeroes read split partial ...passed 00:23:31.877 Test: blockdev reset ...passed 00:23:31.877 Test: blockdev write read 8 blocks ...passed 00:23:31.877 Test: blockdev write read size > 128k ...passed 00:23:31.877 Test: blockdev write read invalid size ...passed 00:23:31.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:31.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:31.878 Test: blockdev write read max offset ...passed 00:23:31.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:31.878 Test: blockdev writev readv 8 blocks 
...passed 00:23:31.878 Test: blockdev writev readv 30 x 1block ...passed 00:23:31.878 Test: blockdev writev readv block ...passed 00:23:31.878 Test: blockdev writev readv size > 128k ...passed 00:23:31.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:31.878 Test: blockdev comparev and writev ...passed 00:23:31.878 Test: blockdev nvme passthru rw ...passed 00:23:31.878 Test: blockdev nvme passthru vendor specific ...passed 00:23:31.878 Test: blockdev nvme admin passthru ...passed 00:23:31.878 Test: blockdev copy ...passed 00:23:31.878 Suite: bdevio tests on: nvme0n2 00:23:31.878 Test: blockdev write read block ...passed 00:23:31.878 Test: blockdev write zeroes read block ...passed 00:23:31.878 Test: blockdev write zeroes read no split ...passed 00:23:31.878 Test: blockdev write zeroes read split ...passed 00:23:32.136 Test: blockdev write zeroes read split partial ...passed 00:23:32.136 Test: blockdev reset ...passed 00:23:32.136 Test: blockdev write read 8 blocks ...passed 00:23:32.136 Test: blockdev write read size > 128k ...passed 00:23:32.136 Test: blockdev write read invalid size ...passed 00:23:32.136 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:32.136 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:32.136 Test: blockdev write read max offset ...passed 00:23:32.136 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:32.136 Test: blockdev writev readv 8 blocks ...passed 00:23:32.136 Test: blockdev writev readv 30 x 1block ...passed 00:23:32.136 Test: blockdev writev readv block ...passed 00:23:32.136 Test: blockdev writev readv size > 128k ...passed 00:23:32.136 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:32.136 Test: blockdev comparev and writev ...passed 00:23:32.136 Test: blockdev nvme passthru rw ...passed 00:23:32.136 Test: blockdev nvme passthru vendor specific ...passed 00:23:32.136 Test: blockdev nvme admin passthru ...passed 00:23:32.136 Test: blockdev copy ...passed 00:23:32.136 Suite: bdevio tests on: nvme0n1 00:23:32.136 Test: blockdev write read block ...passed 00:23:32.136 Test: blockdev write zeroes read block ...passed 00:23:32.136 Test: blockdev write zeroes read no split ...passed 00:23:32.136 Test: blockdev write zeroes read split ...passed 00:23:32.136 Test: blockdev write zeroes read split partial ...passed 00:23:32.136 Test: blockdev reset ...passed 00:23:32.136 Test: blockdev write read 8 blocks ...passed 00:23:32.136 Test: blockdev write read size > 128k ...passed 00:23:32.136 Test: blockdev write read invalid size ...passed 00:23:32.136 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:32.136 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:32.136 Test: blockdev write read max offset ...passed 00:23:32.136 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:32.136 Test: blockdev writev readv 8 blocks ...passed 00:23:32.136 Test: blockdev writev readv 30 x 1block ...passed 00:23:32.136 Test: blockdev writev readv block ...passed 00:23:32.136 Test: blockdev writev readv size > 128k ...passed 00:23:32.136 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:32.136 Test: blockdev comparev and writev ...passed 00:23:32.136 Test: blockdev nvme passthru rw ...passed 00:23:32.136 Test: blockdev nvme passthru vendor specific ...passed 00:23:32.136 Test: blockdev nvme admin passthru ...passed 00:23:32.136 Test: blockdev copy ...passed 
00:23:32.136 00:23:32.136 Run Summary: Type Total Ran Passed Failed Inactive 00:23:32.136 suites 6 6 n/a 0 0 00:23:32.136 tests 138 138 138 0 0 00:23:32.136 asserts 780 780 780 0 n/a 00:23:32.136 00:23:32.136 Elapsed time = 1.248 seconds 00:23:32.136 0 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74369 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74369 ']' 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74369 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74369 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74369' 00:23:32.136 killing process with pid 74369 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74369 00:23:32.136 13:17:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74369 00:23:33.511 ************************************ 00:23:33.511 END TEST bdev_bounds 00:23:33.511 ************************************ 00:23:33.511 13:17:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:33.511 00:23:33.511 real 0m3.067s 00:23:33.511 user 0m8.164s 00:23:33.511 sys 0m0.385s 00:23:33.511 13:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.511 13:17:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:33.511 13:17:39 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:33.511 13:17:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:33.511 13:17:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.511 13:17:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:33.511 ************************************ 00:23:33.511 START TEST bdev_nbd 00:23:33.511 ************************************ 00:23:33.511 13:17:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:33.511 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:33.511 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:33.511 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
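Every test in this log, including the bdev_nbd run starting here, is framed by run_test, which prints the asterisk START/END banners and the real/user/sys timing seen above. A simplified reconstruction of that wrapper (the real helper in autotest_common.sh also tracks results and xtrace state):

    # Simplified run_test: banner, time the named test function, banner.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # e.g. run_test bdev_nbd nbd_function_test ...
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }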
00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74436 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74436 /var/tmp/spdk-nbd.sock 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74436 ']' 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:33.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.512 13:17:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:33.512 [2024-12-06 13:17:39.823327] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
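The nbd_function_test trace that follows exports each xnvme bdev as a kernel /dev/nbdN node via the nbd_start_disk RPC, probes it with a single direct-I/O dd read, and later unmaps it with nbd_stop_disk. One iteration, condensed from the trace below (retry loops elided; device and file names as logged):

    # One bdev -> NBD round trip against the dedicated spdk-nbd socket.
    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_device=$($rpc nbd_start_disk nvme0n1)    # RPC allocates and prints a free node
    grep -q -w "${nbd_device#/dev/}" /proc/partitions
    dd if="$nbd_device" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) == 4096 ]]     # the probe really read one block
    $rpc nbd_stop_disk "$nbd_device"
    rm -f /tmp/nbdtest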
00:23:33.512 [2024-12-06 13:17:39.823801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.512 [2024-12-06 13:17:40.024308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.783 [2024-12-06 13:17:40.169476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:34.351 13:17:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:34.609 
1+0 records in 00:23:34.609 1+0 records out 00:23:34.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663678 s, 6.2 MB/s 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:34.609 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.176 1+0 records in 00:23:35.176 1+0 records out 00:23:35.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510015 s, 8.0 MB/s 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:35.176 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:23:35.435 13:17:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.435 1+0 records in 00:23:35.435 1+0 records out 00:23:35.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489078 s, 8.4 MB/s 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:35.435 13:17:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.694 1+0 records in 00:23:35.694 1+0 records out 00:23:35.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704829 s, 5.8 MB/s 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:35.694 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:23:35.953 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:23:35.953 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:23:35.953 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:23:35.953 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:23:35.953 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.953 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.953 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.953 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:36.212 1+0 records in 00:23:36.212 1+0 records out 00:23:36.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574217 s, 7.1 MB/s 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:36.212 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:23:36.470 13:17:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:36.470 1+0 records in 00:23:36.470 1+0 records out 00:23:36.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603433 s, 6.8 MB/s 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:36.470 13:17:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:36.728 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:36.728 { 00:23:36.728 "nbd_device": "/dev/nbd0", 00:23:36.728 "bdev_name": "nvme0n1" 00:23:36.728 }, 00:23:36.728 { 00:23:36.728 "nbd_device": "/dev/nbd1", 00:23:36.728 "bdev_name": "nvme0n2" 00:23:36.728 }, 00:23:36.728 { 00:23:36.728 "nbd_device": "/dev/nbd2", 00:23:36.728 "bdev_name": "nvme0n3" 00:23:36.728 }, 00:23:36.728 { 00:23:36.728 "nbd_device": "/dev/nbd3", 00:23:36.728 "bdev_name": "nvme1n1" 00:23:36.728 }, 00:23:36.728 { 00:23:36.728 "nbd_device": "/dev/nbd4", 00:23:36.728 "bdev_name": "nvme2n1" 00:23:36.728 }, 00:23:36.728 { 00:23:36.728 "nbd_device": "/dev/nbd5", 00:23:36.728 "bdev_name": "nvme3n1" 00:23:36.728 } 00:23:36.728 ]' 00:23:36.728 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:36.729 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:36.729 { 00:23:36.729 "nbd_device": "/dev/nbd0", 00:23:36.729 "bdev_name": "nvme0n1" 00:23:36.729 }, 00:23:36.729 { 00:23:36.729 "nbd_device": "/dev/nbd1", 00:23:36.729 "bdev_name": "nvme0n2" 00:23:36.729 }, 00:23:36.729 { 00:23:36.729 "nbd_device": "/dev/nbd2", 00:23:36.729 "bdev_name": "nvme0n3" 00:23:36.729 }, 00:23:36.729 { 00:23:36.729 "nbd_device": "/dev/nbd3", 00:23:36.729 "bdev_name": "nvme1n1" 00:23:36.729 }, 00:23:36.729 { 00:23:36.729 "nbd_device": "/dev/nbd4", 00:23:36.729 "bdev_name": "nvme2n1" 00:23:36.729 }, 00:23:36.729 { 00:23:36.729 "nbd_device": "/dev/nbd5", 00:23:36.729 "bdev_name": "nvme3n1" 00:23:36.729 } 00:23:36.729 ]' 00:23:36.729 13:17:43 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:36.729 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:23:36.729 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:36.729 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:23:36.729 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:36.729 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:36.729 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.729 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.987 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.245 13:17:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.812 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:38.071 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:38.329 13:17:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:38.587 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:39.153 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:23:39.410 /dev/nbd0 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.410 1+0 records in 00:23:39.410 1+0 records out 00:23:39.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508681 s, 8.1 MB/s 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:39.410 13:17:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:23:39.667 /dev/nbd1 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.667 1+0 records in 00:23:39.667 1+0 records out 00:23:39.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452992 s, 9.0 MB/s 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:39.667 13:17:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:39.667 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:23:40.233 /dev/nbd10 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.233 1+0 records in 00:23:40.233 1+0 records out 00:23:40.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662078 s, 6.2 MB/s 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:40.233 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:23:40.493 /dev/nbd11 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:40.493 13:17:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.493 1+0 records in 00:23:40.493 1+0 records out 00:23:40.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692345 s, 5.9 MB/s 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:40.493 13:17:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:23:40.753 /dev/nbd12 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:41.011 1+0 records in 00:23:41.011 1+0 records out 00:23:41.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543048 s, 7.5 MB/s 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:41.011 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:23:41.269 /dev/nbd13 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:41.269 1+0 records in 00:23:41.269 1+0 records out 00:23:41.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673335 s, 6.1 MB/s 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:41.269 13:17:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:41.525 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:41.525 { 00:23:41.525 "nbd_device": "/dev/nbd0", 00:23:41.525 "bdev_name": "nvme0n1" 00:23:41.525 }, 00:23:41.525 { 00:23:41.525 "nbd_device": "/dev/nbd1", 00:23:41.526 "bdev_name": "nvme0n2" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd10", 00:23:41.526 "bdev_name": "nvme0n3" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd11", 00:23:41.526 "bdev_name": "nvme1n1" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd12", 00:23:41.526 "bdev_name": "nvme2n1" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd13", 00:23:41.526 "bdev_name": "nvme3n1" 00:23:41.526 } 00:23:41.526 ]' 00:23:41.526 13:17:48 
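With all six devices attached, the harness cross-checks the RPC view against expectations: nbd_get_disks returns the JSON mapping echoed here, from which jq extracts the device paths and grep -c counts them. Roughly (a sketch; $rpc is shorthand for the scripts/rpc.py invocation used throughout the log):

    # Derive the attached-device list from nbd_get_disks and assert the count.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_disks_json=$($rpc nbd_get_disks)
    mapfile -t nbd_disks_name < <(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(printf '%s\n' "${nbd_disks_name[@]}" | grep -c /dev/nbd)
    [ "$count" -eq 6 ]   # six bdevs were started, so six nbd nodes must exist
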
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd0", 00:23:41.526 "bdev_name": "nvme0n1" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd1", 00:23:41.526 "bdev_name": "nvme0n2" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd10", 00:23:41.526 "bdev_name": "nvme0n3" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd11", 00:23:41.526 "bdev_name": "nvme1n1" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd12", 00:23:41.526 "bdev_name": "nvme2n1" 00:23:41.526 }, 00:23:41.526 { 00:23:41.526 "nbd_device": "/dev/nbd13", 00:23:41.526 "bdev_name": "nvme3n1" 00:23:41.526 } 00:23:41.526 ]' 00:23:41.526 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:41.783 /dev/nbd1 00:23:41.783 /dev/nbd10 00:23:41.783 /dev/nbd11 00:23:41.783 /dev/nbd12 00:23:41.783 /dev/nbd13' 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:41.783 /dev/nbd1 00:23:41.783 /dev/nbd10 00:23:41.783 /dev/nbd11 00:23:41.783 /dev/nbd12 00:23:41.783 /dev/nbd13' 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:41.783 256+0 records in 00:23:41.783 256+0 records out 00:23:41.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00858325 s, 122 MB/s 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:41.783 256+0 records in 00:23:41.783 256+0 records out 00:23:41.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115045 s, 9.1 MB/s 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:41.783 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:42.040 256+0 records in 00:23:42.040 256+0 records out 00:23:42.040 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.12355 s, 8.5 MB/s 00:23:42.040 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:42.040 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:23:42.040 256+0 records in 00:23:42.040 256+0 records out 00:23:42.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119668 s, 8.8 MB/s 00:23:42.040 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:42.040 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:23:42.297 256+0 records in 00:23:42.297 256+0 records out 00:23:42.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129681 s, 8.1 MB/s 00:23:42.297 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:42.297 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:23:42.297 256+0 records in 00:23:42.297 256+0 records out 00:23:42.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116017 s, 9.0 MB/s 00:23:42.297 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:42.297 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:23:42.555 256+0 records in 00:23:42.555 256+0 records out 00:23:42.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116658 s, 9.0 MB/s 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:42.555 13:17:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:42.812 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:42.812 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:42.813 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:42.813 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:42.813 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:42.813 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:42.813 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:42.813 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:42.813 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:42.813 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:43.070 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
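The write/verify pass that just finished follows a simple scheme, reconstructed here from the dd and cmp lines above: fill a 1 MiB scratch file from /dev/urandom, copy it onto every nbd device with O_DIRECT, then compare the first 1 MiB of each device byte-for-byte against the scratch file.

    # Outline of nbd_dd_data_verify as traced above (write, then verify).
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
    for dev in "${nbds[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbds[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"   # any mismatch fails the test
    done
    rm "$tmp"
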
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:43.327 13:17:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:43.890 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:44.147 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:23:44.407 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:23:44.407 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:23:44.407 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:23:44.408 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:44.408 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:23:44.408 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:23:44.408 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:44.408 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:44.408 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:44.408 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:44.408 13:17:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:44.665 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:44.923 malloc_lvol_verify 00:23:44.923 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:45.260 db948328-01d2-4f5b-986c-1a4b4888de5d 00:23:45.260 13:17:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:45.825 895421a1-d60b-4bb0-b7bc-625ca01568a7 00:23:45.825 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:46.082 /dev/nbd0 00:23:46.082 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:46.082 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:46.082 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:46.082 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:46.082 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:46.082 mke2fs 1.47.0 (5-Feb-2023) 00:23:46.082 
Discarding device blocks: 0/4096 done 00:23:46.082 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:46.082 00:23:46.082 Allocating group tables: 0/1 done 00:23:46.082 Writing inode tables: 0/1 done 00:23:46.083 Creating journal (1024 blocks): done 00:23:46.083 Writing superblocks and filesystem accounting information: 0/1 done 00:23:46.083 00:23:46.083 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:46.083 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:46.083 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:46.083 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:46.083 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:46.083 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:46.083 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74436 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74436 ']' 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74436 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74436 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74436' 00:23:46.341 killing process with pid 74436 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74436 00:23:46.341 13:17:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74436 00:23:47.714 ************************************ 00:23:47.714 END TEST bdev_nbd 00:23:47.714 ************************************ 00:23:47.714 13:17:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:47.714 00:23:47.714 real 0m14.155s 00:23:47.714 user 0m20.695s 00:23:47.714 sys 0m4.516s 00:23:47.714 13:17:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.714 13:17:53 blockdev_xnvme.bdev_nbd -- 
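The final bdev_nbd step above, nbd_with_lvol_verify, round-trips a logical volume through nbd: a 16 MiB malloc bdev with 512-byte blocks backs an lvstore, a 4 MiB lvol on it is exported as /dev/nbd0, and mkfs.ext4 serves as the end-to-end read/write check. Condensed from the RPC calls in the trace (sizes taken verbatim from the traced arguments):

    # lvol-over-nbd sanity check, as traced above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0    # succeeds only if reads and writes work end to end
    $rpc nbd_stop_disk /dev/nbd0
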
common/autotest_common.sh@10 -- # set +x 00:23:47.714 13:17:53 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:23:47.714 13:17:53 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:23:47.714 13:17:53 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:23:47.714 13:17:53 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:23:47.714 13:17:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.714 13:17:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.714 13:17:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:47.714 ************************************ 00:23:47.714 START TEST bdev_fio 00:23:47.714 ************************************ 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:47.714 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:23:47.714 
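fio_config_gen above seeds bdev.fio with a verify-workload template (the template body is piped in by the cat call and is not visible in the log) plus serialize_overlap=1 for the AIO case; the echo calls just below then append one two-line job section per bdev, so the tail of the generated file presumably reads:

    serialize_overlap=1

    [job_nvme0n1]
    filename=nvme0n1

    [job_nvme0n2]
    filename=nvme0n2

    [job_nvme0n3]
    filename=nvme0n3

    [job_nvme1n1]
    filename=nvme1n1

    [job_nvme2n1]
    filename=nvme2n1

    [job_nvme3n1]
    filename=nvme3n1
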
13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:47.714 ************************************ 00:23:47.714 START TEST bdev_fio_rw_verify 00:23:47.714 ************************************ 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:47.714 13:17:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:47.714 13:17:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:47.714 13:17:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:47.714 13:17:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:23:47.714 13:17:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:47.714 13:17:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:47.714 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:47.714 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:47.714 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:47.714 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:47.714 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:47.714 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:47.714 fio-3.35 00:23:47.714 Starting 6 threads 00:23:59.908 00:23:59.908 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74884: Fri Dec 6 13:18:06 2024 00:23:59.908 read: IOPS=26.9k, BW=105MiB/s (110MB/s)(1051MiB/10001msec) 00:23:59.908 slat (usec): min=3, max=1792, avg= 7.48, stdev= 5.76 00:23:59.908 clat (usec): min=105, max=19950, avg=689.19, 
stdev=391.26 00:23:59.908 lat (usec): min=111, max=19963, avg=696.68, stdev=391.72 00:23:59.908 clat percentiles (usec): 00:23:59.908 | 50.000th=[ 693], 99.000th=[ 1336], 99.900th=[ 3818], 99.990th=[19268], 00:23:59.908 | 99.999th=[20055] 00:23:59.908 write: IOPS=27.1k, BW=106MiB/s (111MB/s)(1057MiB/10001msec); 0 zone resets 00:23:59.908 slat (usec): min=14, max=3317, avg=29.66, stdev=28.68 00:23:59.908 clat (usec): min=106, max=20009, avg=769.25, stdev=416.29 00:23:59.908 lat (usec): min=131, max=20047, avg=798.92, stdev=418.19 00:23:59.908 clat percentiles (usec): 00:23:59.908 | 50.000th=[ 766], 99.000th=[ 1532], 99.900th=[ 3851], 99.990th=[19530], 00:23:59.908 | 99.999th=[20055] 00:23:59.908 bw ( KiB/s): min=92880, max=132315, per=99.81%, avg=108060.16, stdev=1844.75, samples=114 00:23:59.908 iops : min=23220, max=33078, avg=27014.74, stdev=461.16, samples=114 00:23:59.908 lat (usec) : 250=1.97%, 500=17.79%, 750=34.14%, 1000=35.71% 00:23:59.908 lat (msec) : 2=10.00%, 4=0.31%, 10=0.06%, 20=0.03%, 50=0.01% 00:23:59.908 cpu : usr=60.16%, sys=26.30%, ctx=7014, majf=0, minf=23145 00:23:59.908 IO depths : 1=12.2%, 2=24.8%, 4=50.2%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.908 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.908 issued rwts: total=268969,270692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.908 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:59.908 00:23:59.908 Run status group 0 (all jobs): 00:23:59.908 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=1051MiB (1102MB), run=10001-10001msec 00:23:59.908 WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1057MiB (1109MB), run=10001-10001msec 00:24:01.287 ----------------------------------------------------- 00:24:01.287 Suppressions used: 00:24:01.287 count bytes template 00:24:01.287 6 48 /usr/src/fio/parse.c 00:24:01.287 1531 146976 /usr/src/fio/iolog.c 00:24:01.287 1 8 libtcmalloc_minimal.so 00:24:01.287 1 904 libcrypto.so 00:24:01.287 ----------------------------------------------------- 00:24:01.287 00:24:01.287 00:24:01.287 real 0m13.498s 00:24:01.287 user 0m38.026s 00:24:01.287 sys 0m16.132s 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:24:01.287 ************************************ 00:24:01.287 END TEST bdev_fio_rw_verify 00:24:01.287 ************************************ 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:24:01.287 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "855ee8ee-7f44-49aa-a39e-d09bf0768d98"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "855ee8ee-7f44-49aa-a39e-d09bf0768d98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "56b43bd2-44d7-4997-8988-1c5215fbc3b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "56b43bd2-44d7-4997-8988-1c5215fbc3b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a79f38db-e9de-402d-9b1b-da9ba00b843d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a79f38db-e9de-402d-9b1b-da9ba00b843d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "63f40e8a-e97e-474f-9515-4bff6cc55952"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "63f40e8a-e97e-474f-9515-4bff6cc55952",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "dee2dd3b-8f0b-4c4f-8751-110258876f19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dee2dd3b-8f0b-4c4f-8751-110258876f19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "45f8ba24-46e4-47e3-803c-e823b315e365"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "45f8ba24-46e4-47e3-803c-e823b315e365",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:01.288 /home/vagrant/spdk_repo/spdk 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
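Two techniques from the bdev_fio trace above are worth unpacking. First, the sanitizer handling at autotest_common.sh@1341-@1356: the wrapper runs ldd against the fio plugin, greps for an ASan runtime, and preloads that library ahead of the plugin so fio loads the sanitizer before any instrumented code. A condensed sketch of the same idea, with paths taken from this run (the loop paraphrases autotest_common.sh rather than quoting it):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  for sanitizer in libasan libclang_rt.asan; do
      # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
      # field 3 is the resolved library path.
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && break
  done
  # The sanitizer must come first in LD_PRELOAD so it is loaded before the plugin.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev ...

Second, the jq filter at blockdev.sh@354 decides whether a trim workload is even possible: it keeps only bdevs whose supported_io_types.unmap is true. Every xNVMe bdev in the dump above reports "unmap": false, so the filter yields nothing, the [[ -n '' ]] test fails, and the trim fio pass is skipped. Against live RPC output the equivalent query would be (an illustration, not a line from the test; bdev_get_bdevs returns an array, hence the leading .[]):

  ./scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'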
00:24:01.288 00:24:01.288 real 0m13.685s 00:24:01.288 user 0m38.123s 00:24:01.288 sys 0m16.212s 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.288 13:18:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:01.288 ************************************ 00:24:01.288 END TEST bdev_fio 00:24:01.288 ************************************ 00:24:01.288 13:18:07 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:01.288 13:18:07 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:01.288 13:18:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:01.288 13:18:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.288 13:18:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:01.288 ************************************ 00:24:01.288 START TEST bdev_verify 00:24:01.288 ************************************ 00:24:01.288 13:18:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:01.288 [2024-12-06 13:18:07.723061] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:24:01.288 [2024-12-06 13:18:07.723235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75058 ] 00:24:01.553 [2024-12-06 13:18:07.910693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:01.553 [2024-12-06 13:18:08.037923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.553 [2024-12-06 13:18:08.037932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.136 Running I/O for 5 seconds... 
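A few arithmetic cross-checks on the bdevperf verify run just launched. The core mask -m 0x3 is binary 11, i.e. cores 0 and 1, which is why exactly two reactors started above; -q 128 -o 4096 means queue depth 128 with 4 KiB I/Os, and -t 5 matches the five-second run. Those parameters tie the result table below together:

  bandwidth:    23947.03 IOPS x 4096 B = 23947.03/256 MiB/s = 93.54 MiB/s, the Total row's MiB/s figure
  Little's law: 128 (queue depth) / 70192.33 us (nvme0n1 avg latency) = ~1823 IOPS, close to the 1820.13 reported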
00:24:04.450 24320.00 IOPS, 95.00 MiB/s [2024-12-06T13:18:11.914Z] 23520.00 IOPS, 91.88 MiB/s [2024-12-06T13:18:12.848Z] 24021.33 IOPS, 93.83 MiB/s [2024-12-06T13:18:13.782Z] 24400.00 IOPS, 95.31 MiB/s [2024-12-06T13:18:13.782Z] 24179.20 IOPS, 94.45 MiB/s 00:24:07.254 Latency(us) 00:24:07.254 [2024-12-06T13:18:13.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.254 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x0 length 0x80000 00:24:07.254 nvme0n1 : 5.06 1820.13 7.11 0.00 0.00 70192.33 10187.87 79119.83 00:24:07.254 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x80000 length 0x80000 00:24:07.254 nvme0n1 : 5.03 1830.94 7.15 0.00 0.00 69779.21 10187.87 72447.07 00:24:07.254 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x0 length 0x80000 00:24:07.254 nvme0n2 : 5.04 1802.54 7.04 0.00 0.00 70743.40 15609.48 61961.31 00:24:07.254 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x80000 length 0x80000 00:24:07.254 nvme0n2 : 5.04 1830.34 7.15 0.00 0.00 69674.20 13405.09 66250.94 00:24:07.254 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x0 length 0x80000 00:24:07.254 nvme0n3 : 5.04 1801.96 7.04 0.00 0.00 70633.78 10068.71 71017.19 00:24:07.254 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x80000 length 0x80000 00:24:07.254 nvme0n3 : 5.04 1829.72 7.15 0.00 0.00 69575.93 8519.68 70063.94 00:24:07.254 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x0 length 0xbd0bd 00:24:07.254 nvme1n1 : 5.06 2821.40 11.02 0.00 0.00 44994.48 5004.57 59339.87 00:24:07.254 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:24:07.254 nvme1n1 : 5.06 2881.56 11.26 0.00 0.00 44072.72 4379.00 62437.93 00:24:07.254 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x0 length 0xa0000 00:24:07.254 nvme2n1 : 5.06 1821.31 7.11 0.00 0.00 69521.89 7804.74 77689.95 00:24:07.254 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0xa0000 length 0xa0000 00:24:07.254 nvme2n1 : 5.07 1844.33 7.20 0.00 0.00 68649.16 9413.35 70540.57 00:24:07.254 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x0 length 0x20000 00:24:07.254 nvme3n1 : 5.06 1820.73 7.11 0.00 0.00 69422.65 6732.33 69587.32 00:24:07.254 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:07.254 Verification LBA range: start 0x20000 length 0x20000 00:24:07.254 nvme3n1 : 5.07 1842.07 7.20 0.00 0.00 68599.71 4319.42 72923.69 00:24:07.254 [2024-12-06T13:18:13.782Z] =================================================================================================================== 00:24:07.254 [2024-12-06T13:18:13.782Z] Total : 23947.03 93.54 0.00 0.00 63676.46 4319.42 79119.83 00:24:08.188 00:24:08.188 real 0m7.004s 00:24:08.188 user 0m11.016s 00:24:08.188 sys 0m1.780s 00:24:08.188 13:18:14 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.188 13:18:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:24:08.188 ************************************ 00:24:08.188 END TEST bdev_verify 00:24:08.188 ************************************ 00:24:08.188 13:18:14 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:08.188 13:18:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:08.188 13:18:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.188 13:18:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:08.188 ************************************ 00:24:08.188 START TEST bdev_verify_big_io 00:24:08.188 ************************************ 00:24:08.188 13:18:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:08.447 [2024-12-06 13:18:14.767900] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:24:08.447 [2024-12-06 13:18:14.768053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75158 ] 00:24:08.705 [2024-12-06 13:18:14.994537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:08.705 [2024-12-06 13:18:15.108695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.705 [2024-12-06 13:18:15.108706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.269 Running I/O for 5 seconds... 
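The same cross-check applies to this big-I/O pass, where -o 65536 switches to 64 KiB I/Os with everything else unchanged: the Total row below works out to 1510.99 IOPS x 65536 B = 1510.99/16 MiB/s = 94.44 MiB/s. The roughly 16x drop in IOPS versus the 4 KiB verify run is almost exactly offset by the 16x larger I/O size, so aggregate bandwidth lands within a percent of the earlier 93.54 MiB/s.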
00:24:15.134 584.00 IOPS, 36.50 MiB/s [2024-12-06T13:18:21.920Z] 2964.00 IOPS, 185.25 MiB/s 00:24:15.392 Latency(us) 00:24:15.392 [2024-12-06T13:18:21.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.392 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:15.392 Verification LBA range: start 0x0 length 0x8000 00:24:15.392 nvme0n1 : 6.03 118.09 7.38 0.00 0.00 1039294.19 22997.18 1357429.29 00:24:15.392 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:15.392 Verification LBA range: start 0x8000 length 0x8000 00:24:15.392 nvme0n1 : 5.93 151.18 9.45 0.00 0.00 817950.45 133455.13 1288795.23 00:24:15.392 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:15.392 Verification LBA range: start 0x0 length 0x8000 00:24:15.392 nvme0n2 : 6.01 130.49 8.16 0.00 0.00 913760.91 145847.39 953250.91 00:24:15.392 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:15.392 Verification LBA range: start 0x8000 length 0x8000 00:24:15.392 nvme0n2 : 5.82 123.65 7.73 0.00 0.00 980640.99 91512.09 1738729.66 00:24:15.392 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:15.392 Verification LBA range: start 0x0 length 0x8000 00:24:15.392 nvme0n3 : 6.05 97.89 6.12 0.00 0.00 1208404.13 35031.97 2287802.18 00:24:15.392 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:15.392 Verification LBA range: start 0x8000 length 0x8000 00:24:15.392 nvme0n3 : 5.93 105.25 6.58 0.00 0.00 1102580.57 137268.13 1998013.91 00:24:15.392 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:15.393 Verification LBA range: start 0x0 length 0xbd0b 00:24:15.393 nvme1n1 : 6.03 151.14 9.45 0.00 0.00 759458.17 13345.51 937998.89 00:24:15.393 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:15.393 Verification LBA range: start 0xbd0b length 0xbd0b 00:24:15.393 nvme1n1 : 5.83 126.27 7.89 0.00 0.00 906868.46 6345.08 1342177.28 00:24:15.393 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:15.393 Verification LBA range: start 0x0 length 0xa000 00:24:15.393 nvme2n1 : 6.04 148.39 9.27 0.00 0.00 750569.19 13285.93 892242.85 00:24:15.393 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:15.393 Verification LBA range: start 0xa000 length 0xa000 00:24:15.393 nvme2n1 : 6.01 154.32 9.65 0.00 0.00 709737.93 45517.73 1098145.05 00:24:15.393 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:15.393 Verification LBA range: start 0x0 length 0x2000 00:24:15.393 nvme3n1 : 6.04 95.32 5.96 0.00 0.00 1134269.18 13941.29 2806370.68 00:24:15.393 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:15.393 Verification LBA range: start 0x2000 length 0x2000 00:24:15.393 nvme3n1 : 6.02 109.01 6.81 0.00 0.00 992198.79 3798.11 3233427.08 00:24:15.393 [2024-12-06T13:18:21.921Z] =================================================================================================================== 00:24:15.393 [2024-12-06T13:18:21.921Z] Total : 1510.99 94.44 0.00 0.00 918306.68 3798.11 3233427.08 00:24:16.767 00:24:16.767 real 0m8.289s 00:24:16.767 user 0m15.122s 00:24:16.767 sys 0m0.461s 00:24:16.767 13:18:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.767 13:18:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:24:16.767 ************************************ 00:24:16.767 END TEST bdev_verify_big_io 00:24:16.767 ************************************ 00:24:16.767 13:18:22 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:16.767 13:18:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:16.767 13:18:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.767 13:18:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:16.767 ************************************ 00:24:16.767 START TEST bdev_write_zeroes 00:24:16.767 ************************************ 00:24:16.767 13:18:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:16.767 [2024-12-06 13:18:23.109870] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:24:16.767 [2024-12-06 13:18:23.110044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75269 ] 00:24:16.767 [2024-12-06 13:18:23.291816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.026 [2024-12-06 13:18:23.397645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.612 Running I/O for 1 seconds... 00:24:18.554 63136.00 IOPS, 246.62 MiB/s 00:24:18.554 Latency(us) 00:24:18.554 [2024-12-06T13:18:25.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.554 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:18.554 nvme0n1 : 1.02 9548.16 37.30 0.00 0.00 13390.21 7030.23 23712.12 00:24:18.554 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:18.554 nvme0n2 : 1.03 9603.29 37.51 0.00 0.00 13303.88 7149.38 20971.52 00:24:18.554 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:18.554 nvme0n3 : 1.02 9531.86 37.23 0.00 0.00 13391.68 7089.80 23950.43 00:24:18.554 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:18.554 nvme1n1 : 1.03 14915.67 58.26 0.00 0.00 8529.18 4468.36 24069.59 00:24:18.554 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:18.554 nvme2n1 : 1.02 9508.44 37.14 0.00 0.00 13334.49 7447.27 23950.43 00:24:18.554 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:18.554 nvme3n1 : 1.02 9492.86 37.08 0.00 0.00 13345.83 7506.85 24427.05 00:24:18.554 [2024-12-06T13:18:25.082Z] =================================================================================================================== 00:24:18.554 [2024-12-06T13:18:25.082Z] Total : 62600.29 244.53 0.00 0.00 12199.49 4468.36 24427.05 00:24:19.485 00:24:19.485 real 0m2.926s 00:24:19.485 user 0m2.177s 00:24:19.485 sys 0m0.552s 00:24:19.485 13:18:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.485 13:18:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:24:19.485 ************************************ 00:24:19.485 END TEST 
bdev_write_zeroes 00:24:19.485 ************************************ 00:24:19.485 13:18:25 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:19.485 13:18:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:19.485 13:18:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.485 13:18:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:19.485 ************************************ 00:24:19.485 START TEST bdev_json_nonenclosed 00:24:19.485 ************************************ 00:24:19.485 13:18:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:19.743 [2024-12-06 13:18:26.083933] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:24:19.743 [2024-12-06 13:18:26.084118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75318 ] 00:24:19.743 [2024-12-06 13:18:26.268943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.001 [2024-12-06 13:18:26.396818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.001 [2024-12-06 13:18:26.396989] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:24:20.001 [2024-12-06 13:18:26.397024] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:20.001 [2024-12-06 13:18:26.397041] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:20.258 00:24:20.258 real 0m0.665s 00:24:20.258 user 0m0.449s 00:24:20.258 sys 0m0.110s 00:24:20.258 13:18:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.258 13:18:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:24:20.258 ************************************ 00:24:20.258 END TEST bdev_json_nonenclosed 00:24:20.258 ************************************ 00:24:20.258 13:18:26 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:20.258 13:18:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:20.258 13:18:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.258 13:18:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:20.258 ************************************ 00:24:20.258 START TEST bdev_json_nonarray 00:24:20.258 ************************************ 00:24:20.258 13:18:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:20.258 [2024-12-06 13:18:26.778684] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
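Both JSON negative tests exercise the same check in json_config.c: a --json config must be a single object whose "subsystems" key is an array. nonenclosed.json broke the first rule (the "not enclosed in {}" error above), and nonarray.json, run next, breaks the second (the "'subsystems' should be an array" error printed below). The smallest config satisfying both rules would look like this (illustrative, not a file from the repo):

  cat > minimal.json <<'EOF'
  {
    "subsystems": []
  }
  EOF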
00:24:20.258 [2024-12-06 13:18:26.778853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75348 ] 00:24:20.516 [2024-12-06 13:18:26.952601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.773 [2024-12-06 13:18:27.061627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.773 [2024-12-06 13:18:27.061757] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:24:20.773 [2024-12-06 13:18:27.061785] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:20.773 [2024-12-06 13:18:27.061800] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:21.030 00:24:21.030 real 0m0.683s 00:24:21.030 user 0m0.450s 00:24:21.030 sys 0m0.127s 00:24:21.030 13:18:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.030 13:18:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:24:21.030 ************************************ 00:24:21.030 END TEST bdev_json_nonarray 00:24:21.030 ************************************ 00:24:21.030 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:24:21.030 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:24:21.030 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:24:21.030 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:24:21.030 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:24:21.030 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:21.031 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:21.031 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:24:21.031 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:24:21.031 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:24:21.031 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:24:21.031 13:18:27 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:21.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:22.162 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:22.419 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:22.419 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:24:22.419 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:24:22.677 00:24:22.677 real 0m59.151s 00:24:22.677 user 1m43.893s 00:24:22.677 sys 0m27.657s 00:24:22.677 13:18:28 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:22.677 13:18:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:22.677 ************************************ 00:24:22.677 END TEST blockdev_xnvme 00:24:22.677 ************************************ 00:24:22.677 13:18:28 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:24:22.677 13:18:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:22.677 13:18:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.677 13:18:28 -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.677 ************************************ 00:24:22.677 START TEST ublk 00:24:22.677 ************************************ 00:24:22.677 13:18:28 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:24:22.677 * Looking for test storage... 00:24:22.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:22.677 13:18:29 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:22.677 13:18:29 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:24:22.677 13:18:29 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:22.677 13:18:29 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:22.677 13:18:29 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.677 13:18:29 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.678 13:18:29 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.678 13:18:29 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.678 13:18:29 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.678 13:18:29 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.678 13:18:29 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.678 13:18:29 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.678 13:18:29 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.678 13:18:29 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.678 13:18:29 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.678 13:18:29 ublk -- scripts/common.sh@344 -- # case "$op" in 00:24:22.678 13:18:29 ublk -- scripts/common.sh@345 -- # : 1 00:24:22.678 13:18:29 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.678 13:18:29 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.678 13:18:29 ublk -- scripts/common.sh@365 -- # decimal 1 00:24:22.678 13:18:29 ublk -- scripts/common.sh@353 -- # local d=1 00:24:22.678 13:18:29 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.678 13:18:29 ublk -- scripts/common.sh@355 -- # echo 1 00:24:22.678 13:18:29 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.678 13:18:29 ublk -- scripts/common.sh@366 -- # decimal 2 00:24:22.678 13:18:29 ublk -- scripts/common.sh@353 -- # local d=2 00:24:22.678 13:18:29 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.678 13:18:29 ublk -- scripts/common.sh@355 -- # echo 2 00:24:22.678 13:18:29 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.678 13:18:29 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.678 13:18:29 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.678 13:18:29 ublk -- scripts/common.sh@368 -- # return 0 00:24:22.678 13:18:29 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.678 13:18:29 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:22.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.678 --rc genhtml_branch_coverage=1 00:24:22.678 --rc genhtml_function_coverage=1 00:24:22.678 --rc genhtml_legend=1 00:24:22.678 --rc geninfo_all_blocks=1 00:24:22.678 --rc geninfo_unexecuted_blocks=1 00:24:22.678 00:24:22.678 ' 00:24:22.678 13:18:29 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:22.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.678 --rc genhtml_branch_coverage=1 00:24:22.678 --rc genhtml_function_coverage=1 00:24:22.678 --rc genhtml_legend=1 00:24:22.678 --rc geninfo_all_blocks=1 00:24:22.678 --rc geninfo_unexecuted_blocks=1 00:24:22.678 00:24:22.678 ' 00:24:22.678 13:18:29 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:22.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.678 --rc genhtml_branch_coverage=1 00:24:22.678 --rc genhtml_function_coverage=1 00:24:22.678 --rc genhtml_legend=1 00:24:22.678 --rc geninfo_all_blocks=1 00:24:22.678 --rc geninfo_unexecuted_blocks=1 00:24:22.678 00:24:22.678 ' 00:24:22.678 13:18:29 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:22.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.678 --rc genhtml_branch_coverage=1 00:24:22.678 --rc genhtml_function_coverage=1 00:24:22.678 --rc genhtml_legend=1 00:24:22.678 --rc geninfo_all_blocks=1 00:24:22.678 --rc geninfo_unexecuted_blocks=1 00:24:22.678 00:24:22.678 ' 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:22.678 13:18:29 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:22.678 13:18:29 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:22.678 13:18:29 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:22.678 13:18:29 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:22.678 13:18:29 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:22.678 13:18:29 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:22.678 13:18:29 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:22.678 13:18:29 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:24:22.678 13:18:29 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:24:22.678 13:18:29 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:24:22.678 13:18:29 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:22.678 13:18:29 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.678 13:18:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:22.678 ************************************ 00:24:22.678 START TEST test_save_ublk_config 00:24:22.678 ************************************ 00:24:22.678 13:18:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:24:22.678 13:18:29 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75638 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75638 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75638 ']' 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.936 13:18:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:22.936 [2024-12-06 13:18:29.347638] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
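Backing up to the scripts/common.sh trace at the top of this ublk run: lt 1.15 2 hands both strings to cmp_versions, which splits them on the ".-:" separators and compares field by field until one side wins; here 1 < 2 in the leading field, so lcov 1.15 sorts below 2 and the pre-2.0 "--rc lcov_branch_coverage=1" option spelling is exported. A condensed re-implementation of the same idea, simplified to dot-separated versions and hypothetical rather than SPDK's exact helper:

  # version_lt A B: succeed when version A sorts strictly before version B.
  version_lt() {
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # all fields equal: not strictly less
  }
  version_lt 1.15 2 && echo "1.15 < 2"   # prints, mirroring the trace above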
00:24:22.936 [2024-12-06 13:18:29.347800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75638 ] 00:24:23.194 [2024-12-06 13:18:29.520619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.194 [2024-12-06 13:18:29.641339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:24.129 [2024-12-06 13:18:30.472882] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:24.129 [2024-12-06 13:18:30.473962] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:24.129 malloc0 00:24:24.129 [2024-12-06 13:18:30.553050] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:24:24.129 [2024-12-06 13:18:30.553200] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:24:24.129 [2024-12-06 13:18:30.553220] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:24.129 [2024-12-06 13:18:30.553231] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:24.129 [2024-12-06 13:18:30.561111] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:24.129 [2024-12-06 13:18:30.561160] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:24.129 [2024-12-06 13:18:30.568877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:24.129 [2024-12-06 13:18:30.569028] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:24.129 [2024-12-06 13:18:30.585901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:24.129 0 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.129 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:24.389 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.389 13:18:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:24:24.389 "subsystems": [ 00:24:24.389 { 00:24:24.389 "subsystem": "fsdev", 00:24:24.389 "config": [ 00:24:24.389 { 00:24:24.389 "method": "fsdev_set_opts", 00:24:24.389 "params": { 00:24:24.389 "fsdev_io_pool_size": 65535, 00:24:24.389 "fsdev_io_cache_size": 256 00:24:24.389 } 00:24:24.389 } 00:24:24.389 ] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "keyring", 00:24:24.389 "config": [] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "iobuf", 00:24:24.389 "config": [ 00:24:24.389 { 
00:24:24.389 "method": "iobuf_set_options", 00:24:24.389 "params": { 00:24:24.389 "small_pool_count": 8192, 00:24:24.389 "large_pool_count": 1024, 00:24:24.389 "small_bufsize": 8192, 00:24:24.389 "large_bufsize": 135168, 00:24:24.389 "enable_numa": false 00:24:24.389 } 00:24:24.389 } 00:24:24.389 ] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "sock", 00:24:24.389 "config": [ 00:24:24.389 { 00:24:24.389 "method": "sock_set_default_impl", 00:24:24.389 "params": { 00:24:24.389 "impl_name": "posix" 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "sock_impl_set_options", 00:24:24.389 "params": { 00:24:24.389 "impl_name": "ssl", 00:24:24.389 "recv_buf_size": 4096, 00:24:24.389 "send_buf_size": 4096, 00:24:24.389 "enable_recv_pipe": true, 00:24:24.389 "enable_quickack": false, 00:24:24.389 "enable_placement_id": 0, 00:24:24.389 "enable_zerocopy_send_server": true, 00:24:24.389 "enable_zerocopy_send_client": false, 00:24:24.389 "zerocopy_threshold": 0, 00:24:24.389 "tls_version": 0, 00:24:24.389 "enable_ktls": false 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "sock_impl_set_options", 00:24:24.389 "params": { 00:24:24.389 "impl_name": "posix", 00:24:24.389 "recv_buf_size": 2097152, 00:24:24.389 "send_buf_size": 2097152, 00:24:24.389 "enable_recv_pipe": true, 00:24:24.389 "enable_quickack": false, 00:24:24.389 "enable_placement_id": 0, 00:24:24.389 "enable_zerocopy_send_server": true, 00:24:24.389 "enable_zerocopy_send_client": false, 00:24:24.389 "zerocopy_threshold": 0, 00:24:24.389 "tls_version": 0, 00:24:24.389 "enable_ktls": false 00:24:24.389 } 00:24:24.389 } 00:24:24.389 ] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "vmd", 00:24:24.389 "config": [] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "accel", 00:24:24.389 "config": [ 00:24:24.389 { 00:24:24.389 "method": "accel_set_options", 00:24:24.389 "params": { 00:24:24.389 "small_cache_size": 128, 00:24:24.389 "large_cache_size": 16, 00:24:24.389 "task_count": 2048, 00:24:24.389 "sequence_count": 2048, 00:24:24.389 "buf_count": 2048 00:24:24.389 } 00:24:24.389 } 00:24:24.389 ] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "bdev", 00:24:24.389 "config": [ 00:24:24.389 { 00:24:24.389 "method": "bdev_set_options", 00:24:24.389 "params": { 00:24:24.389 "bdev_io_pool_size": 65535, 00:24:24.389 "bdev_io_cache_size": 256, 00:24:24.389 "bdev_auto_examine": true, 00:24:24.389 "iobuf_small_cache_size": 128, 00:24:24.389 "iobuf_large_cache_size": 16 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "bdev_raid_set_options", 00:24:24.389 "params": { 00:24:24.389 "process_window_size_kb": 1024, 00:24:24.389 "process_max_bandwidth_mb_sec": 0 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "bdev_iscsi_set_options", 00:24:24.389 "params": { 00:24:24.389 "timeout_sec": 30 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "bdev_nvme_set_options", 00:24:24.389 "params": { 00:24:24.389 "action_on_timeout": "none", 00:24:24.389 "timeout_us": 0, 00:24:24.389 "timeout_admin_us": 0, 00:24:24.389 "keep_alive_timeout_ms": 10000, 00:24:24.389 "arbitration_burst": 0, 00:24:24.389 "low_priority_weight": 0, 00:24:24.389 "medium_priority_weight": 0, 00:24:24.389 "high_priority_weight": 0, 00:24:24.389 "nvme_adminq_poll_period_us": 10000, 00:24:24.389 "nvme_ioq_poll_period_us": 0, 00:24:24.389 "io_queue_requests": 0, 00:24:24.389 "delay_cmd_submit": true, 00:24:24.389 "transport_retry_count": 4, 00:24:24.389 
"bdev_retry_count": 3, 00:24:24.389 "transport_ack_timeout": 0, 00:24:24.389 "ctrlr_loss_timeout_sec": 0, 00:24:24.389 "reconnect_delay_sec": 0, 00:24:24.389 "fast_io_fail_timeout_sec": 0, 00:24:24.389 "disable_auto_failback": false, 00:24:24.389 "generate_uuids": false, 00:24:24.389 "transport_tos": 0, 00:24:24.389 "nvme_error_stat": false, 00:24:24.389 "rdma_srq_size": 0, 00:24:24.389 "io_path_stat": false, 00:24:24.389 "allow_accel_sequence": false, 00:24:24.389 "rdma_max_cq_size": 0, 00:24:24.389 "rdma_cm_event_timeout_ms": 0, 00:24:24.389 "dhchap_digests": [ 00:24:24.389 "sha256", 00:24:24.389 "sha384", 00:24:24.389 "sha512" 00:24:24.389 ], 00:24:24.389 "dhchap_dhgroups": [ 00:24:24.389 "null", 00:24:24.389 "ffdhe2048", 00:24:24.389 "ffdhe3072", 00:24:24.389 "ffdhe4096", 00:24:24.389 "ffdhe6144", 00:24:24.389 "ffdhe8192" 00:24:24.389 ] 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "bdev_nvme_set_hotplug", 00:24:24.389 "params": { 00:24:24.389 "period_us": 100000, 00:24:24.389 "enable": false 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "bdev_malloc_create", 00:24:24.389 "params": { 00:24:24.389 "name": "malloc0", 00:24:24.389 "num_blocks": 8192, 00:24:24.389 "block_size": 4096, 00:24:24.389 "physical_block_size": 4096, 00:24:24.389 "uuid": "6aba7c3b-1fe7-4070-a87d-bc0b90540c6d", 00:24:24.389 "optimal_io_boundary": 0, 00:24:24.389 "md_size": 0, 00:24:24.389 "dif_type": 0, 00:24:24.389 "dif_is_head_of_md": false, 00:24:24.389 "dif_pi_format": 0 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "bdev_wait_for_examine" 00:24:24.389 } 00:24:24.389 ] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "scsi", 00:24:24.389 "config": null 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "scheduler", 00:24:24.389 "config": [ 00:24:24.389 { 00:24:24.389 "method": "framework_set_scheduler", 00:24:24.389 "params": { 00:24:24.389 "name": "static" 00:24:24.389 } 00:24:24.389 } 00:24:24.389 ] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "vhost_scsi", 00:24:24.389 "config": [] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "vhost_blk", 00:24:24.389 "config": [] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "ublk", 00:24:24.389 "config": [ 00:24:24.389 { 00:24:24.389 "method": "ublk_create_target", 00:24:24.389 "params": { 00:24:24.389 "cpumask": "1" 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "ublk_start_disk", 00:24:24.389 "params": { 00:24:24.389 "bdev_name": "malloc0", 00:24:24.389 "ublk_id": 0, 00:24:24.389 "num_queues": 1, 00:24:24.389 "queue_depth": 128 00:24:24.389 } 00:24:24.389 } 00:24:24.389 ] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "nbd", 00:24:24.389 "config": [] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "nvmf", 00:24:24.389 "config": [ 00:24:24.389 { 00:24:24.389 "method": "nvmf_set_config", 00:24:24.389 "params": { 00:24:24.389 "discovery_filter": "match_any", 00:24:24.389 "admin_cmd_passthru": { 00:24:24.389 "identify_ctrlr": false 00:24:24.389 }, 00:24:24.389 "dhchap_digests": [ 00:24:24.389 "sha256", 00:24:24.389 "sha384", 00:24:24.389 "sha512" 00:24:24.389 ], 00:24:24.389 "dhchap_dhgroups": [ 00:24:24.389 "null", 00:24:24.389 "ffdhe2048", 00:24:24.389 "ffdhe3072", 00:24:24.389 "ffdhe4096", 00:24:24.389 "ffdhe6144", 00:24:24.389 "ffdhe8192" 00:24:24.389 ] 00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "nvmf_set_max_subsystems", 00:24:24.389 "params": { 00:24:24.389 "max_subsystems": 1024 
00:24:24.389 } 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "method": "nvmf_set_crdt", 00:24:24.389 "params": { 00:24:24.389 "crdt1": 0, 00:24:24.389 "crdt2": 0, 00:24:24.389 "crdt3": 0 00:24:24.389 } 00:24:24.389 } 00:24:24.389 ] 00:24:24.389 }, 00:24:24.389 { 00:24:24.389 "subsystem": "iscsi", 00:24:24.389 "config": [ 00:24:24.389 { 00:24:24.389 "method": "iscsi_set_options", 00:24:24.389 "params": { 00:24:24.390 "node_base": "iqn.2016-06.io.spdk", 00:24:24.390 "max_sessions": 128, 00:24:24.390 "max_connections_per_session": 2, 00:24:24.390 "max_queue_depth": 64, 00:24:24.390 "default_time2wait": 2, 00:24:24.390 "default_time2retain": 20, 00:24:24.390 "first_burst_length": 8192, 00:24:24.390 "immediate_data": true, 00:24:24.390 "allow_duplicated_isid": false, 00:24:24.390 "error_recovery_level": 0, 00:24:24.390 "nop_timeout": 60, 00:24:24.390 "nop_in_interval": 30, 00:24:24.390 "disable_chap": false, 00:24:24.390 "require_chap": false, 00:24:24.390 "mutual_chap": false, 00:24:24.390 "chap_group": 0, 00:24:24.390 "max_large_datain_per_connection": 64, 00:24:24.390 "max_r2t_per_connection": 4, 00:24:24.390 "pdu_pool_size": 36864, 00:24:24.390 "immediate_data_pool_size": 16384, 00:24:24.390 "data_out_pool_size": 2048 00:24:24.390 } 00:24:24.390 } 00:24:24.390 ] 00:24:24.390 } 00:24:24.390 ] 00:24:24.390 }' 00:24:24.390 13:18:30 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75638 00:24:24.390 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75638 ']' 00:24:24.390 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75638 00:24:24.390 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:24:24.390 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.390 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75638 00:24:24.691 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.691 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.691 killing process with pid 75638 00:24:24.691 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75638' 00:24:24.691 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75638 00:24:24.691 13:18:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75638 00:24:26.068 [2024-12-06 13:18:32.208620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:26.068 [2024-12-06 13:18:32.248985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:26.068 [2024-12-06 13:18:32.249184] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:26.068 [2024-12-06 13:18:32.259939] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:26.069 [2024-12-06 13:18:32.260032] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:26.069 [2024-12-06 13:18:32.260055] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:26.069 [2024-12-06 13:18:32.260092] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:26.069 [2024-12-06 13:18:32.260307] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:27.972 13:18:33 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75703 00:24:27.972 13:18:33 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75703 00:24:27.972 13:18:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75703 ']' 00:24:27.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.972 13:18:33 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.972 13:18:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:24:27.972 13:18:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.972 13:18:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:24:27.972 "subsystems": [ 00:24:27.972 { 00:24:27.972 "subsystem": "fsdev", 00:24:27.972 "config": [ 00:24:27.972 { 00:24:27.972 "method": "fsdev_set_opts", 00:24:27.972 "params": { 00:24:27.972 "fsdev_io_pool_size": 65535, 00:24:27.972 "fsdev_io_cache_size": 256 00:24:27.972 } 00:24:27.972 } 00:24:27.972 ] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "keyring", 00:24:27.972 "config": [] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "iobuf", 00:24:27.972 "config": [ 00:24:27.972 { 00:24:27.972 "method": "iobuf_set_options", 00:24:27.972 "params": { 00:24:27.972 "small_pool_count": 8192, 00:24:27.972 "large_pool_count": 1024, 00:24:27.972 "small_bufsize": 8192, 00:24:27.972 "large_bufsize": 135168, 00:24:27.972 "enable_numa": false 00:24:27.972 } 00:24:27.972 } 00:24:27.972 ] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "sock", 00:24:27.972 "config": [ 00:24:27.972 { 00:24:27.972 "method": "sock_set_default_impl", 00:24:27.972 "params": { 00:24:27.972 "impl_name": "posix" 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "sock_impl_set_options", 00:24:27.972 "params": { 00:24:27.972 "impl_name": "ssl", 00:24:27.972 "recv_buf_size": 4096, 00:24:27.972 "send_buf_size": 4096, 00:24:27.972 "enable_recv_pipe": true, 00:24:27.972 "enable_quickack": false, 00:24:27.972 "enable_placement_id": 0, 00:24:27.972 "enable_zerocopy_send_server": true, 00:24:27.972 "enable_zerocopy_send_client": false, 00:24:27.972 "zerocopy_threshold": 0, 00:24:27.972 "tls_version": 0, 00:24:27.972 "enable_ktls": false 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "sock_impl_set_options", 00:24:27.972 "params": { 00:24:27.972 "impl_name": "posix", 00:24:27.972 "recv_buf_size": 2097152, 00:24:27.972 "send_buf_size": 2097152, 00:24:27.972 "enable_recv_pipe": true, 00:24:27.972 "enable_quickack": false, 00:24:27.972 "enable_placement_id": 0, 00:24:27.972 "enable_zerocopy_send_server": true, 00:24:27.972 "enable_zerocopy_send_client": false, 00:24:27.972 "zerocopy_threshold": 0, 00:24:27.972 "tls_version": 0, 00:24:27.972 "enable_ktls": false 00:24:27.972 } 00:24:27.972 } 00:24:27.972 ] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "vmd", 00:24:27.972 "config": [] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "accel", 00:24:27.972 "config": [ 00:24:27.972 { 00:24:27.972 "method": "accel_set_options", 00:24:27.972 "params": { 00:24:27.972 "small_cache_size": 128, 00:24:27.972 "large_cache_size": 16, 00:24:27.972 "task_count": 2048, 00:24:27.972 "sequence_count": 2048, 00:24:27.972 "buf_count": 2048 00:24:27.972 } 00:24:27.972 } 00:24:27.972 ] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "bdev", 00:24:27.972 "config": [ 00:24:27.972 { 00:24:27.972 "method": "bdev_set_options", 00:24:27.972 "params": { 00:24:27.972 
"bdev_io_pool_size": 65535, 00:24:27.972 "bdev_io_cache_size": 256, 00:24:27.972 "bdev_auto_examine": true, 00:24:27.972 "iobuf_small_cache_size": 128, 00:24:27.972 "iobuf_large_cache_size": 16 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "bdev_raid_set_options", 00:24:27.972 "params": { 00:24:27.972 "process_window_size_kb": 1024, 00:24:27.972 "process_max_bandwidth_mb_sec": 0 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "bdev_iscsi_set_options", 00:24:27.972 "params": { 00:24:27.972 "timeout_sec": 30 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "bdev_nvme_set_options", 00:24:27.972 "params": { 00:24:27.972 "action_on_timeout": "none", 00:24:27.972 "timeout_us": 0, 00:24:27.972 "timeout_admin_us": 0, 00:24:27.972 "keep_alive_timeout_ms": 10000, 00:24:27.972 "arbitration_burst": 0, 00:24:27.972 "low_priority_weight": 0, 00:24:27.972 "medium_priority_weight": 0, 00:24:27.972 "high_priority_weight": 0, 00:24:27.972 "nvme_adminq_poll_period_us": 10000, 00:24:27.972 "nvme_ioq_poll_period_us": 0, 00:24:27.972 "io_queue_requests": 0, 00:24:27.972 "delay_cmd_submit": true, 00:24:27.972 "transport_retry_count": 4, 00:24:27.972 "bdev_retry_count": 3, 00:24:27.972 "transport_ack_timeout": 0, 00:24:27.972 "ctrlr_loss_timeout_sec": 0, 00:24:27.972 "reconnect_delay_sec": 0, 00:24:27.972 "fast_io_fail_timeout_sec": 0, 00:24:27.972 "disable_auto_failback": false, 00:24:27.972 "generate_uuids": false, 00:24:27.972 "transport_tos": 0, 00:24:27.972 "nvme_error_stat": false, 00:24:27.972 "rdma_srq_size": 0, 00:24:27.972 "io_path_stat": false, 00:24:27.972 "allow_accel_sequence": false, 00:24:27.972 "rdma_max_cq_size": 0, 00:24:27.972 "rdma_cm_event_timeout_ms": 0, 00:24:27.972 "dhchap_digests": [ 00:24:27.972 "sha256", 00:24:27.972 "sha384", 00:24:27.972 "sha512" 00:24:27.972 ], 00:24:27.972 "dhchap_dhgroups": [ 00:24:27.972 "null", 00:24:27.972 "ffdhe2048", 00:24:27.972 "ffdhe3072", 00:24:27.972 "ffdhe4096", 00:24:27.972 "ffdhe6144", 00:24:27.972 "ffdhe8192" 00:24:27.972 ] 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "bdev_nvme_set_hotplug", 00:24:27.972 "params": { 00:24:27.972 "period_us": 100000, 00:24:27.972 "enable": false 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "bdev_malloc_create", 00:24:27.972 "params": { 00:24:27.972 "name": "malloc0", 00:24:27.972 "num_blocks": 8192, 00:24:27.972 "block_size": 4096, 00:24:27.972 "physical_block_size": 4096, 00:24:27.972 "uuid": "6aba7c3b-1fe7-4070-a87d-bc0b90540c6d", 00:24:27.972 "optimal_io_boundary": 0, 00:24:27.972 "md_size": 0, 00:24:27.972 "dif_type": 0, 00:24:27.972 "dif_is_head_of_md": false, 00:24:27.972 "dif_pi_format": 0 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "bdev_wait_for_examine" 00:24:27.972 } 00:24:27.972 ] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "scsi", 00:24:27.972 "config": null 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "scheduler", 00:24:27.972 "config": [ 00:24:27.972 { 00:24:27.972 "method": "framework_set_scheduler", 00:24:27.972 "params": { 00:24:27.972 "name": "static" 00:24:27.972 } 00:24:27.972 } 00:24:27.972 ] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "vhost_scsi", 00:24:27.972 "config": [] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "vhost_blk", 00:24:27.972 "config": [] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "ublk", 00:24:27.972 "config": [ 00:24:27.972 { 00:24:27.972 "method": "ublk_create_target", 
00:24:27.972 "params": { 00:24:27.972 "cpumask": "1" 00:24:27.972 } 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "method": "ublk_start_disk", 00:24:27.972 "params": { 00:24:27.972 "bdev_name": "malloc0", 00:24:27.972 "ublk_id": 0, 00:24:27.972 "num_queues": 1, 00:24:27.972 "queue_depth": 128 00:24:27.972 } 00:24:27.972 } 00:24:27.972 ] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "nbd", 00:24:27.972 "config": [] 00:24:27.972 }, 00:24:27.972 { 00:24:27.972 "subsystem": "nvmf", 00:24:27.972 "config": [ 00:24:27.972 { 00:24:27.972 "method": "nvmf_set_config", 00:24:27.972 "params": { 00:24:27.972 "discovery_filter": "match_any", 00:24:27.972 "admin_cmd_passthru": { 00:24:27.972 "identify_ctrlr": false 00:24:27.972 }, 00:24:27.972 "dhchap_digests": [ 00:24:27.972 "sha256", 00:24:27.973 "sha384", 00:24:27.973 "sha512" 00:24:27.973 ], 00:24:27.973 "dhchap_dhgroups": [ 00:24:27.973 "null", 00:24:27.973 "ffdhe2048", 00:24:27.973 "ffdhe3072", 00:24:27.973 "ffdhe4096", 00:24:27.973 "ffdhe6144", 00:24:27.973 "ffdhe8192" 00:24:27.973 ] 00:24:27.973 } 00:24:27.973 }, 00:24:27.973 { 00:24:27.973 "method": "nvmf_set_max_subsystems", 00:24:27.973 "params": { 00:24:27.973 "max_subsystems": 1024 00:24:27.973 } 00:24:27.973 }, 00:24:27.973 { 00:24:27.973 "method": "nvmf_set_crdt", 00:24:27.973 "params": { 00:24:27.973 "crdt1": 0, 00:24:27.973 "crdt2": 0, 00:24:27.973 "crdt3": 0 00:24:27.973 } 00:24:27.973 } 00:24:27.973 ] 00:24:27.973 }, 00:24:27.973 { 00:24:27.973 "subsystem": "iscsi", 00:24:27.973 "config": [ 00:24:27.973 { 00:24:27.973 "method": "iscsi_set_options", 00:24:27.973 "params": { 00:24:27.973 "node_base": "iqn.2016-06.io.spdk", 00:24:27.973 "max_sessions": 128, 00:24:27.973 "max_connections_per_session": 2, 00:24:27.973 "max_queue_depth": 64, 00:24:27.973 "default_time2wait": 2, 00:24:27.973 "default_time2retain": 20, 00:24:27.973 "first_burst_length": 8192, 00:24:27.973 "immediate_data": true, 00:24:27.973 "allow_duplicated_isid": false, 00:24:27.973 "error_recovery_level": 0, 00:24:27.973 "nop_timeout": 60, 00:24:27.973 "nop_in_interval": 30, 00:24:27.973 "disable_chap": false, 00:24:27.973 "require_chap": false, 00:24:27.973 "mutual_chap": false, 00:24:27.973 "chap_group": 0, 00:24:27.973 "max_large_datain_per_connection": 64, 00:24:27.973 "max_r2t_per_connection": 4, 00:24:27.973 "pdu_pool_size": 36864, 00:24:27.973 "immediate_data_pool_size": 16384, 00:24:27.973 "data_out_pool_size": 2048 00:24:27.973 } 00:24:27.973 } 00:24:27.973 ] 00:24:27.973 } 00:24:27.973 ] 00:24:27.973 }' 00:24:27.973 13:18:33 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.973 13:18:33 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.973 13:18:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:27.973 [2024-12-06 13:18:34.136761] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
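The spdk_tgt invocation traced above (ublk/ublk.sh@118) reads its JSON configuration from /dev/fd/63: the echoed document reaches the binary through bash process substitution. A minimal sketch of the same round trip, assuming a built SPDK tree at $SPDK_DIR and the default RPC socket; treating scripts/rpc.py save_config as the source of the JSON is an assumption, since the capture step falls outside this excerpt:

    # Snapshot the running target's configuration as a JSON document.
    config=$("$SPDK_DIR"/scripts/rpc.py save_config)
    # Hand it to a fresh target; <(...) is what appears above as /dev/fd/63.
    "$SPDK_DIR"/build/bin/spdk_tgt -L ublk -c <(echo "$config") &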
00:24:27.973 [2024-12-06 13:18:34.137021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75703 ] 00:24:27.973 [2024-12-06 13:18:34.315357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.973 [2024-12-06 13:18:34.419442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.908 [2024-12-06 13:18:35.368864] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:28.908 [2024-12-06 13:18:35.369977] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:28.908 [2024-12-06 13:18:35.377031] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:24:28.908 [2024-12-06 13:18:35.377159] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:24:28.908 [2024-12-06 13:18:35.377178] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:28.908 [2024-12-06 13:18:35.377187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:28.908 [2024-12-06 13:18:35.385941] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:28.908 [2024-12-06 13:18:35.385975] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:28.908 [2024-12-06 13:18:35.392884] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:28.908 [2024-12-06 13:18:35.393017] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:28.908 [2024-12-06 13:18:35.409875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:24:29.164 13:18:35 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75703 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75703 ']' 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75703 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75703 00:24:29.165 killing process with pid 75703 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.165 
13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75703' 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75703 00:24:29.165 13:18:35 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75703 00:24:31.067 [2024-12-06 13:18:37.237321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:31.067 [2024-12-06 13:18:37.261965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:31.067 [2024-12-06 13:18:37.262140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:31.067 [2024-12-06 13:18:37.271914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:31.067 [2024-12-06 13:18:37.271988] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:31.067 [2024-12-06 13:18:37.272003] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:31.067 [2024-12-06 13:18:37.272049] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:31.067 [2024-12-06 13:18:37.272238] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:32.968 13:18:39 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:24:32.968 00:24:32.968 real 0m9.803s 00:24:32.968 user 0m7.591s 00:24:32.968 sys 0m3.269s 00:24:32.968 ************************************ 00:24:32.968 END TEST test_save_ublk_config 00:24:32.968 ************************************ 00:24:32.968 13:18:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.968 13:18:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:32.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.968 13:18:39 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75790 00:24:32.968 13:18:39 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:32.968 13:18:39 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.968 13:18:39 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75790 00:24:32.968 13:18:39 ublk -- common/autotest_common.sh@835 -- # '[' -z 75790 ']' 00:24:32.968 13:18:39 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.968 13:18:39 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.968 13:18:39 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.968 13:18:39 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.968 13:18:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:32.968 [2024-12-06 13:18:39.198963] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
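Each target in this log is started in the background, and the harness then blocks until the RPC socket at /var/tmp/spdk.sock answers, which is what the waitforlisten and killprocess traces around it show. A rough sketch of that lifecycle, assuming autotest_common.sh is sourced (it defines both helpers) and the paths from this log:

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # Polls until the target listens on /var/tmp/spdk.sock.
    waitforlisten "$spdk_pid"
    # ... exercise the target ...
    killprocess "$spdk_pid"   # kill -0 liveness check, kill, then wait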
00:24:32.968 [2024-12-06 13:18:39.199131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75790 ] 00:24:32.968 [2024-12-06 13:18:39.399305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:33.225 [2024-12-06 13:18:39.529288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.225 [2024-12-06 13:18:39.529289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.161 13:18:40 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.161 13:18:40 ublk -- common/autotest_common.sh@868 -- # return 0 00:24:34.161 13:18:40 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:24:34.161 13:18:40 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:34.161 13:18:40 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.161 13:18:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.161 ************************************ 00:24:34.161 START TEST test_create_ublk 00:24:34.161 ************************************ 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:24:34.161 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.161 [2024-12-06 13:18:40.390879] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:34.161 [2024-12-06 13:18:40.393395] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.161 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:24:34.161 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.161 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:24:34.161 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.161 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.161 [2024-12-06 13:18:40.660158] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:34.161 [2024-12-06 13:18:40.660673] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:34.161 [2024-12-06 13:18:40.660703] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:34.161 [2024-12-06 13:18:40.660714] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:34.161 [2024-12-06 13:18:40.667903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:34.161 [2024-12-06 13:18:40.667958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:34.161 
[2024-12-06 13:18:40.677901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:34.161 [2024-12-06 13:18:40.678698] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:34.420 [2024-12-06 13:18:40.695913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:34.420 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:24:34.420 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.420 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.420 13:18:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:24:34.420 { 00:24:34.420 "ublk_device": "/dev/ublkb0", 00:24:34.420 "id": 0, 00:24:34.420 "queue_depth": 512, 00:24:34.420 "num_queues": 4, 00:24:34.420 "bdev_name": "Malloc0" 00:24:34.420 } 00:24:34.420 ]' 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:24:34.420 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:24:34.678 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:34.678 13:18:40 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
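The fio_template assembled just above expands to the single command sketched below. Because --time_based --runtime=10 lets the write phase consume the whole run, the --do_verify=1 read-back never executes; fio prints a warning saying exactly that at the start of the output that follows. The flags are copied from the trace, with only line breaks and comments added:

    # Pattern-0xcc O_DIRECT sequential writes over a 128 MiB span of the
    # ublk block device; the 10 s time budget overrides the --size target.
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0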
00:24:34.678 13:18:40 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:24:34.678 fio: verification read phase will never start because write phase uses all of runtime
00:24:34.678 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:24:34.678 fio-3.35
00:24:34.678 Starting 1 process
00:24:46.886
00:24:46.886 fio_test: (groupid=0, jobs=1): err= 0: pid=75837: Fri Dec 6 13:18:51 2024
00:24:46.886 write: IOPS=10.5k, BW=40.9MiB/s (42.8MB/s)(409MiB/10001msec); 0 zone resets
00:24:46.886 clat (usec): min=62, max=7927, avg=93.73, stdev=163.26
00:24:46.886 lat (usec): min=62, max=7929, avg=94.70, stdev=163.30
00:24:46.886 clat percentiles (usec):
00:24:46.886 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 76],
00:24:46.886 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 82],
00:24:46.886 | 70.00th=[ 85], 80.00th=[ 91], 90.00th=[ 103], 95.00th=[ 116],
00:24:46.886 | 99.00th=[ 141], 99.50th=[ 178], 99.90th=[ 3261], 99.95th=[ 3621],
00:24:46.886 | 99.99th=[ 4015]
00:24:46.886 bw ( KiB/s): min=15960, max=47856, per=99.35%, avg=41568.42, stdev=6738.18, samples=19
00:24:46.886 iops : min= 3990, max=11964, avg=10392.11, stdev=1684.54, samples=19
00:24:46.886 lat (usec) : 100=88.81%, 250=10.73%, 500=0.02%, 750=0.02%, 1000=0.04%
00:24:46.886 lat (msec) : 2=0.14%, 4=0.23%, 10=0.01%
00:24:46.886 cpu : usr=3.40%, sys=8.72%, ctx=104617, majf=0, minf=798
00:24:46.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:24:46.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:46.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:46.886 issued rwts: total=0,104611,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:46.886 latency : target=0, window=0, percentile=100.00%, depth=1
00:24:46.886
00:24:46.886 Run status group 0 (all jobs):
00:24:46.886 WRITE: bw=40.9MiB/s (42.8MB/s), 40.9MiB/s-40.9MiB/s (42.8MB/s-42.8MB/s), io=409MiB (428MB), run=10001-10001msec
00:24:46.886
00:24:46.886 Disk stats (read/write):
00:24:46.886 ublkb0: ios=0/103375, merge=0/0, ticks=0/8772, in_queue=8773, util=99.06%
00:24:46.886 13:18:51 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:24:46.886 [2024-12-06 13:18:51.249771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:24:46.886 [2024-12-06 13:18:51.286380] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:24:46.886 [2024-12-06 13:18:51.287393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:24:46.886 [2024-12-06 13:18:51.296872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:24:46.886 [2024-12-06 13:18:51.297246] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:24:46.886 [2024-12-06 13:18:51.297272] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.886 13:18:51 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:24:46.886 [2024-12-06 13:18:51.304991] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:24:46.886 request:
00:24:46.886 {
00:24:46.886 "ublk_id": 0,
00:24:46.886 "method": "ublk_stop_disk",
00:24:46.886 "req_id": 1
00:24:46.886 }
00:24:46.886 Got JSON-RPC error response
00:24:46.886 response:
00:24:46.886 {
00:24:46.886 "code": -19,
00:24:46.886 "message": "No such device"
00:24:46.886 }
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:46.886 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:46.886 13:18:51 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:24:46.887 [2024-12-06 13:18:51.320997] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:24:46.887 [2024-12-06 13:18:51.328864] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:24:46.887 [2024-12-06 13:18:51.328918] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.887 13:18:51 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.887 13:18:51 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:24:46.887 13:18:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:24:46.887 13:18:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.887 13:18:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:24:46.887 13:18:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:24:46.887 13:18:52 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0
']' 00:24:46.887 13:18:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:46.887 13:18:52 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.887 13:18:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.887 13:18:52 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.887 13:18:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:46.887 13:18:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:24:46.887 ************************************ 00:24:46.887 END TEST test_create_ublk 00:24:46.887 ************************************ 00:24:46.887 13:18:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:46.887 00:24:46.887 real 0m11.750s 00:24:46.887 user 0m0.842s 00:24:46.887 sys 0m0.974s 00:24:46.887 13:18:52 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.887 13:18:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.887 13:18:52 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:24:46.887 13:18:52 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:46.887 13:18:52 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.887 13:18:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.887 ************************************ 00:24:46.887 START TEST test_create_multi_ublk 00:24:46.887 ************************************ 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.887 [2024-12-06 13:18:52.191862] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:46.887 [2024-12-06 13:18:52.194203] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.887 [2024-12-06 13:18:52.464110] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:46.887 [2024-12-06 
13:18:52.464620] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:46.887 [2024-12-06 13:18:52.464637] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:46.887 [2024-12-06 13:18:52.464651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:46.887 [2024-12-06 13:18:52.471909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:46.887 [2024-12-06 13:18:52.471949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:46.887 [2024-12-06 13:18:52.479903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:46.887 [2024-12-06 13:18:52.480677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:46.887 [2024-12-06 13:18:52.491209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.887 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.887 [2024-12-06 13:18:52.747054] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:24:46.887 [2024-12-06 13:18:52.747565] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:24:46.887 [2024-12-06 13:18:52.747593] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:46.887 [2024-12-06 13:18:52.747604] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:46.887 [2024-12-06 13:18:52.755193] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:46.887 [2024-12-06 13:18:52.755240] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:46.887 [2024-12-06 13:18:52.762934] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:46.888 [2024-12-06 13:18:52.763748] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:46.888 [2024-12-06 13:18:52.779901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:46.888 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.888 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:24:46.888 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:46.888 13:18:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:24:46.888 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.888 13:18:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.888 [2024-12-06 13:18:53.043154] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:24:46.888 [2024-12-06 13:18:53.043885] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:24:46.888 [2024-12-06 13:18:53.043907] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:24:46.888 [2024-12-06 13:18:53.043919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:24:46.888 [2024-12-06 13:18:53.050999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:46.888 [2024-12-06 13:18:53.051053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:46.888 [2024-12-06 13:18:53.058891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:46.888 [2024-12-06 13:18:53.059678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:24:46.888 [2024-12-06 13:18:53.065412] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.888 [2024-12-06 13:18:53.335108] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:24:46.888 [2024-12-06 13:18:53.335635] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:24:46.888 [2024-12-06 13:18:53.335663] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:24:46.888 [2024-12-06 13:18:53.335673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:24:46.888 [2024-12-06 13:18:53.342942] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:46.888 [2024-12-06 13:18:53.343007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:46.888 [2024-12-06 13:18:53.350933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:46.888 [2024-12-06 13:18:53.351793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:24:46.888 [2024-12-06 13:18:53.366892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:24:46.888 { 00:24:46.888 "ublk_device": "/dev/ublkb0", 00:24:46.888 "id": 0, 00:24:46.888 "queue_depth": 512, 00:24:46.888 "num_queues": 4, 00:24:46.888 "bdev_name": "Malloc0" 00:24:46.888 }, 00:24:46.888 { 00:24:46.888 "ublk_device": "/dev/ublkb1", 00:24:46.888 "id": 1, 00:24:46.888 "queue_depth": 512, 00:24:46.888 "num_queues": 4, 00:24:46.888 "bdev_name": "Malloc1" 00:24:46.888 }, 00:24:46.888 { 00:24:46.888 "ublk_device": "/dev/ublkb2", 00:24:46.888 "id": 2, 00:24:46.888 "queue_depth": 512, 00:24:46.888 "num_queues": 4, 00:24:46.888 "bdev_name": "Malloc2" 00:24:46.888 }, 00:24:46.888 { 00:24:46.888 "ublk_device": "/dev/ublkb3", 00:24:46.888 "id": 3, 00:24:46.888 "queue_depth": 512, 00:24:46.888 "num_queues": 4, 00:24:46.888 "bdev_name": "Malloc3" 00:24:46.888 } 00:24:46.888 ]' 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:46.888 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:24:47.147 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:47.147 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:24:47.147 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:24:47.147 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:24:47.147 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:47.147 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:24:47.147 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:47.147 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:24:47.405 13:18:53 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:47.405 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:24:47.663 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:24:47.663 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:47.663 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:24:47.663 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:24:47.663 13:18:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:24:47.663 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:24:47.663 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:24:47.663 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:47.663 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:24:47.663 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:47.663 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:47.921 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:48.179 [2024-12-06 13:18:54.503223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:24:48.179 [2024-12-06 13:18:54.541965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:48.179 [2024-12-06 13:18:54.543072] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:48.179 [2024-12-06 13:18:54.550957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:48.179 [2024-12-06 13:18:54.551335] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:48.179 [2024-12-06 13:18:54.551362] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:48.179 [2024-12-06 13:18:54.566047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:24:48.179 [2024-12-06 13:18:54.594413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:48.179 [2024-12-06 13:18:54.595637] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:24:48.179 [2024-12-06 13:18:54.605929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:48.179 [2024-12-06 13:18:54.606319] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:24:48.179 [2024-12-06 13:18:54.606348] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:48.179 [2024-12-06 13:18:54.622154] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:24:48.179 [2024-12-06 13:18:54.658528] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:48.179 [2024-12-06 13:18:54.659896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:24:48.179 [2024-12-06 13:18:54.669973] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:48.179 [2024-12-06 13:18:54.670425] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:24:48.179 [2024-12-06 13:18:54.670464] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.179 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:48.179 [2024-12-06 
13:18:54.686077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:24:48.438 [2024-12-06 13:18:54.719415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:48.438 [2024-12-06 13:18:54.720503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:24:48.438 [2024-12-06 13:18:54.728933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:48.438 [2024-12-06 13:18:54.729308] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:24:48.438 [2024-12-06 13:18:54.729339] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:24:48.438 13:18:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.438 13:18:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:24:48.817 [2024-12-06 13:18:55.057009] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:48.817 [2024-12-06 13:18:55.064873] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:48.817 [2024-12-06 13:18:55.064944] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:48.817 13:18:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:24:48.817 13:18:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:48.817 13:18:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:48.817 13:18:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.817 13:18:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:49.385 13:18:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.385 13:18:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:49.385 13:18:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:49.385 13:18:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.385 13:18:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:49.642 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.642 13:18:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:49.642 13:18:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:24:49.643 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.643 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:49.901 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.901 13:18:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:49.901 13:18:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:24:49.901 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.901 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:50.158 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.158 13:18:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:24:50.158 13:18:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
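The four create/start sequences and the matching stop/delete teardown traced above all come out of the same seq 0 $MAX_DEV_ID loops in ublk.sh. A condensed sketch of that pattern, assuming the rpc_cmd wrapper around scripts/rpc.py used throughout this log (the trace invokes scripts/rpc.py -t 120 ublk_destroy_target directly for the destroy step):

    MAX_DEV_ID=3
    for i in $(seq 0 $MAX_DEV_ID); do
        # One 128 MiB malloc bdev with 4096-byte blocks per ublk device.
        rpc_cmd bdev_malloc_create -b "Malloc$i" 128 4096
        # Expose it as /dev/ublkb$i with 4 queues of depth 512.
        rpc_cmd ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
    # Teardown mirrors the trace: stop the disks, destroy the target,
    # then delete the backing bdevs.
    for i in $(seq 0 $MAX_DEV_ID); do rpc_cmd ublk_stop_disk "$i"; done
    rpc_cmd ublk_destroy_target
    for i in $(seq 0 $MAX_DEV_ID); do rpc_cmd bdev_malloc_delete "Malloc$i"; done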
00:24:50.158 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.158 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:50.158 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.158 13:18:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:50.158 13:18:56 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:24:50.416 ************************************ 00:24:50.416 END TEST test_create_multi_ublk 00:24:50.416 ************************************ 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:50.416 00:24:50.416 real 0m4.600s 00:24:50.416 user 0m1.445s 00:24:50.416 sys 0m0.164s 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.416 13:18:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:50.416 13:18:56 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:50.416 13:18:56 ublk -- ublk/ublk.sh@147 -- # cleanup 00:24:50.416 13:18:56 ublk -- ublk/ublk.sh@130 -- # killprocess 75790 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@954 -- # '[' -z 75790 ']' 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@958 -- # kill -0 75790 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@959 -- # uname 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75790 00:24:50.416 killing process with pid 75790 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75790' 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@973 -- # kill 75790 00:24:50.416 13:18:56 ublk -- common/autotest_common.sh@978 -- # wait 75790 00:24:51.790 [2024-12-06 13:18:57.987009] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:51.790 [2024-12-06 13:18:57.987088] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:52.725 00:24:52.725 real 0m30.132s 00:24:52.725 user 0m44.321s 00:24:52.725 sys 0m10.073s 00:24:52.725 13:18:59 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.725 ************************************ 00:24:52.725 END TEST ublk 00:24:52.725 ************************************ 00:24:52.725 13:18:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:52.725 13:18:59 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:52.725 13:18:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:52.725 
13:18:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.725 13:18:59 -- common/autotest_common.sh@10 -- # set +x 00:24:52.725 ************************************ 00:24:52.725 START TEST ublk_recovery 00:24:52.725 ************************************ 00:24:52.725 13:18:59 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:52.725 * Looking for test storage... 00:24:52.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:52.725 13:18:59 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:52.725 13:18:59 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:52.725 13:18:59 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:52.984 13:18:59 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.984 13:18:59 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:24:52.985 13:18:59 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:24:52.985 13:18:59 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.985 13:18:59 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:24:52.985 13:18:59 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.985 13:18:59 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.985 13:18:59 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.985 13:18:59 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:52.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.985 --rc genhtml_branch_coverage=1 00:24:52.985 --rc genhtml_function_coverage=1 00:24:52.985 --rc genhtml_legend=1 00:24:52.985 --rc geninfo_all_blocks=1 00:24:52.985 --rc geninfo_unexecuted_blocks=1 00:24:52.985 00:24:52.985 ' 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:52.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.985 --rc genhtml_branch_coverage=1 00:24:52.985 --rc genhtml_function_coverage=1 00:24:52.985 --rc genhtml_legend=1 00:24:52.985 --rc geninfo_all_blocks=1 00:24:52.985 --rc geninfo_unexecuted_blocks=1 00:24:52.985 00:24:52.985 ' 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:52.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.985 --rc genhtml_branch_coverage=1 00:24:52.985 --rc genhtml_function_coverage=1 00:24:52.985 --rc genhtml_legend=1 00:24:52.985 --rc geninfo_all_blocks=1 00:24:52.985 --rc geninfo_unexecuted_blocks=1 00:24:52.985 00:24:52.985 ' 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:52.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.985 --rc genhtml_branch_coverage=1 00:24:52.985 --rc genhtml_function_coverage=1 00:24:52.985 --rc genhtml_legend=1 00:24:52.985 --rc geninfo_all_blocks=1 00:24:52.985 --rc geninfo_unexecuted_blocks=1 00:24:52.985 00:24:52.985 ' 00:24:52.985 13:18:59 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:52.985 13:18:59 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:52.985 13:18:59 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:52.985 13:18:59 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:52.985 13:18:59 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:52.985 13:18:59 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:52.985 13:18:59 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:52.985 13:18:59 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:52.985 13:18:59 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:24:52.985 13:18:59 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:24:52.985 13:18:59 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76202 00:24:52.985 13:18:59 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:52.985 13:18:59 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:52.985 13:18:59 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76202 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76202 ']' 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.985 13:18:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.985 [2024-12-06 13:18:59.479789] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:24:52.985 [2024-12-06 13:18:59.480188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76202 ] 00:24:53.243 [2024-12-06 13:18:59.677493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:53.502 [2024-12-06 13:18:59.799686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.502 [2024-12-06 13:18:59.799689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.080 13:19:00 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.080 13:19:00 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:54.080 13:19:00 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:24:54.080 13:19:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.080 13:19:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.080 [2024-12-06 13:19:00.590868] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:54.080 [2024-12-06 13:19:00.593317] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:54.080 13:19:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.080 13:19:00 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:54.080 13:19:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.080 13:19:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.336 malloc0 00:24:54.336 13:19:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.336 13:19:00 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:24:54.336 13:19:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.336 13:19:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.336 [2024-12-06 13:19:00.727042] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:24:54.336 [2024-12-06 13:19:00.727176] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:24:54.336 [2024-12-06 13:19:00.727197] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:54.336 [2024-12-06 13:19:00.727207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:54.336 [2024-12-06 13:19:00.735985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:54.336 [2024-12-06 13:19:00.736015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:54.336 [2024-12-06 13:19:00.742882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:54.336 [2024-12-06 13:19:00.743056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:54.337 [2024-12-06 13:19:00.758886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:54.337 1 00:24:54.337 13:19:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.337 13:19:00 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:24:55.267 13:19:01 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76243 00:24:55.267 13:19:01 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:24:55.267 13:19:01 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:24:55.524 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:55.524 fio-3.35 00:24:55.524 Starting 1 process 00:25:00.838 13:19:06 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76202 00:25:00.838 13:19:06 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:25:06.114 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76202 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:25:06.114 13:19:11 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76348 00:25:06.114 13:19:11 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:25:06.114 13:19:11 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:06.114 13:19:11 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76348 00:25:06.114 13:19:11 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76348 ']' 00:25:06.114 13:19:11 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.114 13:19:11 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.114 13:19:11 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.114 13:19:11 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.114 13:19:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.114 [2024-12-06 13:19:11.908717] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
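The trace above sets up the crash scenario this test exists for: a ublk disk backed by malloc0 is started, fio runs 60 seconds of direct random read/write against /dev/ublkb1, and the target is SIGKILLed mid-run, after which a fresh spdk_tgt is brought up to reclaim the still-live kernel device. A minimal sketch of the RPC sequence being exercised, assuming the repo's rpc.py helper and the default /var/tmp/spdk.sock socket (every command mirrors one visible in the trace; this is an outline of the flow, not the test script itself):

    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # exposes /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    kill -9 "$spdk_pid"                             # simulated crash mid-I/O
    # after restarting spdk_tgt with the same mask:
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1              # re-adopt the surviving /dev/ublkb1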
00:25:06.114 [2024-12-06 13:19:11.908939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76348 ] 00:25:06.114 [2024-12-06 13:19:12.121125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:06.114 [2024-12-06 13:19:12.277005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.114 [2024-12-06 13:19:12.277009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.680 13:19:13 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.680 13:19:13 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:25:06.680 13:19:13 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:25:06.680 13:19:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.680 13:19:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.680 [2024-12-06 13:19:13.090964] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:06.680 [2024-12-06 13:19:13.093454] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:06.680 13:19:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.680 13:19:13 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:25:06.680 13:19:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.680 13:19:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.938 malloc0 00:25:06.938 13:19:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.938 13:19:13 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:25:06.938 13:19:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.938 13:19:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.938 [2024-12-06 13:19:13.225173] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:25:06.938 [2024-12-06 13:19:13.225225] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:25:06.939 [2024-12-06 13:19:13.225242] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:25:06.939 [2024-12-06 13:19:13.232911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:25:06.939 [2024-12-06 13:19:13.232942] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:25:06.939 1 00:25:06.939 13:19:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.939 13:19:13 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76243 00:25:07.896 [2024-12-06 13:19:14.232977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:25:07.896 [2024-12-06 13:19:14.240876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:25:07.896 [2024-12-06 13:19:14.240903] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:25:08.830 [2024-12-06 13:19:15.244914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:25:08.830 [2024-12-06 13:19:15.252920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:25:08.830 [2024-12-06 13:19:15.252967] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:25:09.765 [2024-12-06 13:19:16.252996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:25:09.765 [2024-12-06 13:19:16.263919] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:25:09.765 [2024-12-06 13:19:16.263947] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:25:09.765 [2024-12-06 13:19:16.263978] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:25:09.765 [2024-12-06 13:19:16.264096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:25:31.693 [2024-12-06 13:19:36.872883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:25:31.693 [2024-12-06 13:19:36.879624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:25:31.693 [2024-12-06 13:19:36.887100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:25:31.693 [2024-12-06 13:19:36.887131] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:25:58.274 00:25:58.274 fio_test: (groupid=0, jobs=1): err= 0: pid=76246: Fri Dec 6 13:20:02 2024 00:25:58.274 read: IOPS=9898, BW=38.7MiB/s (40.5MB/s)(2320MiB/60002msec) 00:25:58.274 slat (nsec): min=1951, max=265794, avg=6559.16, stdev=2884.01 00:25:58.274 clat (usec): min=976, max=30123k, avg=6509.22, stdev=317484.40 00:25:58.274 lat (usec): min=994, max=30123k, avg=6515.78, stdev=317484.40 00:25:58.274 clat percentiles (msec): 00:25:58.274 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:25:58.274 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4], 00:25:58.274 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:25:58.274 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 11], 00:25:58.274 | 99.99th=[17113] 00:25:58.274 bw ( KiB/s): min=10168, max=83392, per=100.00%, avg=78000.55, stdev=11677.20, samples=60 00:25:58.274 iops : min= 2542, max=20848, avg=19500.12, stdev=2919.30, samples=60 00:25:58.274 write: IOPS=9887, BW=38.6MiB/s (40.5MB/s)(2318MiB/60002msec); 0 zone resets 00:25:58.274 slat (nsec): min=1988, max=224006, avg=6785.69, stdev=2911.44 00:25:58.274 clat (usec): min=853, max=30123k, avg=6413.57, stdev=307880.50 00:25:58.274 lat (usec): min=871, max=30123k, avg=6420.36, stdev=307880.49 00:25:58.274 clat percentiles (msec): 00:25:58.274 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:25:58.274 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:25:58.274 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:25:58.274 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 11], 00:25:58.274 | 99.99th=[17113] 00:25:58.274 bw ( KiB/s): min=10600, max=82936, per=100.00%, avg=77925.52, stdev=11634.44, samples=60 00:25:58.274 iops : min= 2650, max=20734, avg=19481.35, stdev=2908.60, samples=60 00:25:58.274 lat (usec) : 1000=0.01% 00:25:58.274 lat (msec) : 2=0.06%, 4=94.29%, 10=5.60%, 20=0.04%, >=2000=0.01% 00:25:58.274 cpu : usr=5.83%, sys=12.56%, ctx=40387, majf=0, minf=13 00:25:58.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:25:58.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:58.274 issued rwts: total=593928,593288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.274 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:25:58.274 00:25:58.274 Run status group 0 (all jobs): 00:25:58.274 READ: bw=38.7MiB/s (40.5MB/s), 38.7MiB/s-38.7MiB/s (40.5MB/s-40.5MB/s), io=2320MiB (2433MB), run=60002-60002msec 00:25:58.274 WRITE: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=2318MiB (2430MB), run=60002-60002msec 00:25:58.274 00:25:58.274 Disk stats (read/write): 00:25:58.274 ublkb1: ios=591550/590985, merge=0/0, ticks=3802917/3674814, in_queue=7477731, util=99.94% 00:25:58.274 13:20:02 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.274 [2024-12-06 13:20:02.041653] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:25:58.274 [2024-12-06 13:20:02.080908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:58.274 [2024-12-06 13:20:02.081135] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:25:58.274 [2024-12-06 13:20:02.092898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:58.274 [2024-12-06 13:20:02.093041] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:25:58.274 [2024-12-06 13:20:02.093057] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.274 13:20:02 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.274 [2024-12-06 13:20:02.104987] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:58.274 [2024-12-06 13:20:02.112877] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:58.274 [2024-12-06 13:20:02.112928] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.274 13:20:02 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:25:58.274 13:20:02 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:25:58.274 13:20:02 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76348 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76348 ']' 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76348 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76348 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:58.274 killing process with pid 76348 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76348' 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76348 00:25:58.274 13:20:02 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76348 00:25:58.274 [2024-12-06 13:20:03.588465] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:58.274 
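One number in the summary above is worth decoding: the worst-case completion latency is the recovery window itself. Rough arithmetic from the timestamps in this log (second resolution, so approximate):

    kill -9 issued at             13:19:06
    "recover done successfully"   13:19:36   ->  ~30 s outage
    fio clat max = 30123k usec = 30,123,000 usec, i.e. ~30.1 s

I/Os that were in flight when the target died simply stalled until recovery finished, which is also why the >=2000 msec bucket shows up at 0.01% while the overall average stays in the few-millisecond range.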
[2024-12-06 13:20:03.588752] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:58.530 ************************************ 00:25:58.530 END TEST ublk_recovery 00:25:58.530 ************************************ 00:25:58.530 00:25:58.530 real 1m5.663s 00:25:58.530 user 1m50.620s 00:25:58.530 sys 0m21.059s 00:25:58.530 13:20:04 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:58.530 13:20:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.530 13:20:04 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:25:58.530 13:20:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:58.530 13:20:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.530 13:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:58.530 13:20:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:25:58.530 13:20:04 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:58.530 13:20:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:58.530 13:20:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:58.530 13:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:58.530 ************************************ 00:25:58.530 START TEST ftl 00:25:58.530 ************************************ 00:25:58.530 13:20:04 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:58.530 * Looking for test storage... 00:25:58.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:58.530 13:20:05 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:58.530 13:20:05 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:25:58.530 13:20:05 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:58.787 13:20:05 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:58.787 13:20:05 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.787 13:20:05 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.787 13:20:05 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.787 13:20:05 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.787 13:20:05 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.787 13:20:05 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.787 13:20:05 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.787 13:20:05 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.787 13:20:05 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.787 13:20:05 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.787 13:20:05 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.787 13:20:05 ftl -- scripts/common.sh@344 -- # case "$op" in 00:25:58.787 13:20:05 ftl -- scripts/common.sh@345 -- # : 1 00:25:58.787 13:20:05 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.787 13:20:05 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.787 13:20:05 ftl -- scripts/common.sh@365 -- # decimal 1 00:25:58.787 13:20:05 ftl -- scripts/common.sh@353 -- # local d=1 00:25:58.787 13:20:05 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.787 13:20:05 ftl -- scripts/common.sh@355 -- # echo 1 00:25:58.787 13:20:05 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.787 13:20:05 ftl -- scripts/common.sh@366 -- # decimal 2 00:25:58.787 13:20:05 ftl -- scripts/common.sh@353 -- # local d=2 00:25:58.787 13:20:05 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.787 13:20:05 ftl -- scripts/common.sh@355 -- # echo 2 00:25:58.787 13:20:05 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.787 13:20:05 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.787 13:20:05 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.787 13:20:05 ftl -- scripts/common.sh@368 -- # return 0 00:25:58.787 13:20:05 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.787 13:20:05 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:58.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.787 --rc genhtml_branch_coverage=1 00:25:58.787 --rc genhtml_function_coverage=1 00:25:58.787 --rc genhtml_legend=1 00:25:58.787 --rc geninfo_all_blocks=1 00:25:58.787 --rc geninfo_unexecuted_blocks=1 00:25:58.787 00:25:58.787 ' 00:25:58.787 13:20:05 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:58.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.787 --rc genhtml_branch_coverage=1 00:25:58.787 --rc genhtml_function_coverage=1 00:25:58.787 --rc genhtml_legend=1 00:25:58.787 --rc geninfo_all_blocks=1 00:25:58.787 --rc geninfo_unexecuted_blocks=1 00:25:58.787 00:25:58.787 ' 00:25:58.787 13:20:05 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:58.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.787 --rc genhtml_branch_coverage=1 00:25:58.787 --rc genhtml_function_coverage=1 00:25:58.787 --rc genhtml_legend=1 00:25:58.787 --rc geninfo_all_blocks=1 00:25:58.787 --rc geninfo_unexecuted_blocks=1 00:25:58.788 00:25:58.788 ' 00:25:58.788 13:20:05 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:58.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.788 --rc genhtml_branch_coverage=1 00:25:58.788 --rc genhtml_function_coverage=1 00:25:58.788 --rc genhtml_legend=1 00:25:58.788 --rc geninfo_all_blocks=1 00:25:58.788 --rc geninfo_unexecuted_blocks=1 00:25:58.788 00:25:58.788 ' 00:25:58.788 13:20:05 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:58.788 13:20:05 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:58.788 13:20:05 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:58.788 13:20:05 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:58.788 13:20:05 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
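The cmp_versions block above (it reappears every time a test sources scripts/common.sh) is a component-wise version comparison: both strings are split on '.', '-' and ':' into arrays, corresponding fields are compared numerically, and the first difference decides. Here 1.15 < 2 at the very first field, so "lt 1.15 2" succeeds and the older --rc lcov_* spelling of the coverage options is exported. A compact sketch of the same idea in plain bash (ver_lt is an illustrative name, not the helper's real one):

    ver_lt() {                       # return 0 when $1 < $2, field by field
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                     # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"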
00:25:58.788 13:20:05 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:58.788 13:20:05 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:58.788 13:20:05 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:58.788 13:20:05 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:58.788 13:20:05 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:58.788 13:20:05 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:58.788 13:20:05 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:58.788 13:20:05 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:58.788 13:20:05 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:58.788 13:20:05 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:58.788 13:20:05 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:58.788 13:20:05 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:58.788 13:20:05 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:58.788 13:20:05 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:58.788 13:20:05 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:58.788 13:20:05 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:58.788 13:20:05 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:58.788 13:20:05 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:58.788 13:20:05 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:58.788 13:20:05 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:58.788 13:20:05 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:58.788 13:20:05 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:58.788 13:20:05 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:58.788 13:20:05 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:58.788 13:20:05 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:58.788 13:20:05 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:25:58.788 13:20:05 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:25:58.788 13:20:05 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:25:58.788 13:20:05 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:25:58.788 13:20:05 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:59.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:59.301 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:59.301 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:59.301 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:59.301 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:59.301 13:20:05 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77133 00:25:59.301 13:20:05 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:59.302 13:20:05 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77133 00:25:59.302 13:20:05 ftl -- common/autotest_common.sh@835 -- # '[' -z 77133 ']' 00:25:59.302 13:20:05 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.302 13:20:05 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.302 13:20:05 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.302 13:20:05 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.302 13:20:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:59.558 [2024-12-06 13:20:05.829148] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:59.558 [2024-12-06 13:20:05.829501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77133 ] 00:25:59.558 [2024-12-06 13:20:06.020795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.816 [2024-12-06 13:20:06.155380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.386 13:20:06 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.386 13:20:06 ftl -- common/autotest_common.sh@868 -- # return 0 00:26:00.386 13:20:06 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:26:00.955 13:20:07 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:26:01.891 13:20:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:26:01.891 13:20:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:02.458 13:20:08 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:26:02.458 13:20:08 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:26:02.458 13:20:08 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:26:02.717 13:20:09 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:26:02.717 13:20:09 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:26:02.717 13:20:09 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:26:02.717 13:20:09 ftl -- ftl/ftl.sh@50 -- # break 00:26:02.717 13:20:09 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:26:02.717 13:20:09 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:26:02.717 13:20:09 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:26:02.717 13:20:09 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:26:02.976 13:20:09 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:26:02.976 13:20:09 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:26:02.976 13:20:09 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:26:02.976 13:20:09 ftl -- ftl/ftl.sh@63 -- # break 00:26:02.976 13:20:09 ftl -- ftl/ftl.sh@66 -- # killprocess 77133 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 77133 ']' 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@958 -- # kill -0 77133 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@959 -- # uname 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.976 13:20:09 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77133 00:26:02.976 killing process with pid 77133 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77133' 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@973 -- # kill 77133 00:26:02.976 13:20:09 ftl -- common/autotest_common.sh@978 -- # wait 77133 00:26:05.507 13:20:11 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:26:05.507 13:20:11 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:26:05.507 13:20:11 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:05.507 13:20:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.507 13:20:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:05.507 ************************************ 00:26:05.507 START TEST ftl_fio_basic 00:26:05.507 ************************************ 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:26:05.507 * Looking for test storage... 00:26:05.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:26:05.507 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.508 --rc genhtml_branch_coverage=1 00:26:05.508 --rc genhtml_function_coverage=1 00:26:05.508 --rc genhtml_legend=1 00:26:05.508 --rc geninfo_all_blocks=1 00:26:05.508 --rc geninfo_unexecuted_blocks=1 00:26:05.508 00:26:05.508 ' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.508 --rc genhtml_branch_coverage=1 00:26:05.508 --rc genhtml_function_coverage=1 00:26:05.508 --rc genhtml_legend=1 00:26:05.508 --rc geninfo_all_blocks=1 00:26:05.508 --rc geninfo_unexecuted_blocks=1 00:26:05.508 00:26:05.508 ' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.508 --rc genhtml_branch_coverage=1 00:26:05.508 --rc genhtml_function_coverage=1 00:26:05.508 --rc genhtml_legend=1 00:26:05.508 --rc geninfo_all_blocks=1 00:26:05.508 --rc geninfo_unexecuted_blocks=1 00:26:05.508 00:26:05.508 ' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.508 --rc genhtml_branch_coverage=1 00:26:05.508 --rc genhtml_function_coverage=1 00:26:05.508 --rc genhtml_legend=1 00:26:05.508 --rc geninfo_all_blocks=1 00:26:05.508 --rc geninfo_unexecuted_blocks=1 00:26:05.508 00:26:05.508 ' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77282 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77282 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77282 ']' 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.508 13:20:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:05.508 [2024-12-06 13:20:11.919771] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
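A small decoder for the core masks in play: this spdk_tgt is launched with -m 7, and 0x7 is binary 111, so three reactors should come up on cores 0 through 2 (the EAL notices that follow confirm it; the ublk_recovery target earlier used -m 0x3, binary 11, hence its two reactors). A one-liner to expand such a mask, as a sketch:

    mask=0x7; for i in {0..31}; do (( (mask >> i) & 1 )) && printf '%d ' "$i"; done; echo    # prints: 0 1 2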
00:26:05.508 [2024-12-06 13:20:11.919988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77282 ] 00:26:05.766 [2024-12-06 13:20:12.109968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:05.766 [2024-12-06 13:20:12.239596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.766 [2024-12-06 13:20:12.239713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.766 [2024-12-06 13:20:12.239718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.704 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.704 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:26:06.704 13:20:13 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:06.704 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:26:06.704 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:06.704 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:26:06.704 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:26:06.704 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:06.963 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:06.963 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:26:06.963 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:06.963 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:06.963 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:06.963 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:26:06.963 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:26:06.963 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:07.222 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:07.222 { 00:26:07.222 "name": "nvme0n1", 00:26:07.222 "aliases": [ 00:26:07.222 "c9927c0f-9bf8-48e9-95f9-4afa67cd3d9b" 00:26:07.222 ], 00:26:07.222 "product_name": "NVMe disk", 00:26:07.222 "block_size": 4096, 00:26:07.222 "num_blocks": 1310720, 00:26:07.222 "uuid": "c9927c0f-9bf8-48e9-95f9-4afa67cd3d9b", 00:26:07.222 "numa_id": -1, 00:26:07.222 "assigned_rate_limits": { 00:26:07.222 "rw_ios_per_sec": 0, 00:26:07.222 "rw_mbytes_per_sec": 0, 00:26:07.222 "r_mbytes_per_sec": 0, 00:26:07.222 "w_mbytes_per_sec": 0 00:26:07.222 }, 00:26:07.222 "claimed": false, 00:26:07.222 "zoned": false, 00:26:07.222 "supported_io_types": { 00:26:07.222 "read": true, 00:26:07.222 "write": true, 00:26:07.222 "unmap": true, 00:26:07.222 "flush": true, 00:26:07.222 "reset": true, 00:26:07.222 "nvme_admin": true, 00:26:07.222 "nvme_io": true, 00:26:07.222 "nvme_io_md": false, 00:26:07.222 "write_zeroes": true, 00:26:07.222 "zcopy": false, 00:26:07.222 "get_zone_info": false, 00:26:07.222 "zone_management": false, 00:26:07.222 "zone_append": false, 00:26:07.222 "compare": true, 00:26:07.222 "compare_and_write": false, 00:26:07.222 "abort": true, 00:26:07.222 
"seek_hole": false, 00:26:07.222 "seek_data": false, 00:26:07.222 "copy": true, 00:26:07.222 "nvme_iov_md": false 00:26:07.222 }, 00:26:07.222 "driver_specific": { 00:26:07.222 "nvme": [ 00:26:07.222 { 00:26:07.222 "pci_address": "0000:00:11.0", 00:26:07.222 "trid": { 00:26:07.222 "trtype": "PCIe", 00:26:07.222 "traddr": "0000:00:11.0" 00:26:07.222 }, 00:26:07.222 "ctrlr_data": { 00:26:07.222 "cntlid": 0, 00:26:07.222 "vendor_id": "0x1b36", 00:26:07.222 "model_number": "QEMU NVMe Ctrl", 00:26:07.222 "serial_number": "12341", 00:26:07.222 "firmware_revision": "8.0.0", 00:26:07.222 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:07.222 "oacs": { 00:26:07.222 "security": 0, 00:26:07.222 "format": 1, 00:26:07.222 "firmware": 0, 00:26:07.222 "ns_manage": 1 00:26:07.222 }, 00:26:07.222 "multi_ctrlr": false, 00:26:07.222 "ana_reporting": false 00:26:07.222 }, 00:26:07.222 "vs": { 00:26:07.222 "nvme_version": "1.4" 00:26:07.222 }, 00:26:07.222 "ns_data": { 00:26:07.222 "id": 1, 00:26:07.222 "can_share": false 00:26:07.222 } 00:26:07.222 } 00:26:07.222 ], 00:26:07.222 "mp_policy": "active_passive" 00:26:07.222 } 00:26:07.222 } 00:26:07.222 ]' 00:26:07.222 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:07.222 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:26:07.222 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:07.480 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:07.480 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:07.480 13:20:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:26:07.480 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:26:07.480 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:07.480 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:26:07.480 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:07.480 13:20:13 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:07.739 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:26:07.739 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:07.998 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=89c098a9-f0b4-4b99-9f49-a4e4201d5a40 00:26:07.998 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 89c098a9-f0b4-4b99-9f49-a4e4201d5a40 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6c6b5c21-891a-41c9-8d7d-31e5325c23cc 
00:26:08.256 13:20:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:26:08.256 13:20:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:08.515 13:20:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:08.515 { 00:26:08.515 "name": "6c6b5c21-891a-41c9-8d7d-31e5325c23cc", 00:26:08.515 "aliases": [ 00:26:08.515 "lvs/nvme0n1p0" 00:26:08.515 ], 00:26:08.515 "product_name": "Logical Volume", 00:26:08.515 "block_size": 4096, 00:26:08.515 "num_blocks": 26476544, 00:26:08.515 "uuid": "6c6b5c21-891a-41c9-8d7d-31e5325c23cc", 00:26:08.515 "assigned_rate_limits": { 00:26:08.515 "rw_ios_per_sec": 0, 00:26:08.515 "rw_mbytes_per_sec": 0, 00:26:08.515 "r_mbytes_per_sec": 0, 00:26:08.515 "w_mbytes_per_sec": 0 00:26:08.515 }, 00:26:08.515 "claimed": false, 00:26:08.515 "zoned": false, 00:26:08.515 "supported_io_types": { 00:26:08.515 "read": true, 00:26:08.515 "write": true, 00:26:08.515 "unmap": true, 00:26:08.515 "flush": false, 00:26:08.515 "reset": true, 00:26:08.515 "nvme_admin": false, 00:26:08.515 "nvme_io": false, 00:26:08.515 "nvme_io_md": false, 00:26:08.515 "write_zeroes": true, 00:26:08.515 "zcopy": false, 00:26:08.515 "get_zone_info": false, 00:26:08.515 "zone_management": false, 00:26:08.515 "zone_append": false, 00:26:08.515 "compare": false, 00:26:08.515 "compare_and_write": false, 00:26:08.515 "abort": false, 00:26:08.515 "seek_hole": true, 00:26:08.515 "seek_data": true, 00:26:08.515 "copy": false, 00:26:08.515 "nvme_iov_md": false 00:26:08.515 }, 00:26:08.515 "driver_specific": { 00:26:08.515 "lvol": { 00:26:08.515 "lvol_store_uuid": "89c098a9-f0b4-4b99-9f49-a4e4201d5a40", 00:26:08.515 "base_bdev": "nvme0n1", 00:26:08.515 "thin_provision": true, 00:26:08.515 "num_allocated_clusters": 0, 00:26:08.515 "snapshot": false, 00:26:08.515 "clone": false, 00:26:08.515 "esnap_clone": false 00:26:08.515 } 00:26:08.515 } 00:26:08.515 } 00:26:08.515 ]' 00:26:08.515 13:20:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:08.515 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:26:08.515 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:08.774 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:08.774 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:08.774 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:26:08.774 13:20:15 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:26:08.774 13:20:15 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:26:08.774 13:20:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:09.032 13:20:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:09.032 13:20:15 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:09.032 13:20:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:09.032 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:09.032 13:20:15 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:09.032 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:26:09.032 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:26:09.032 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:09.289 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:09.289 { 00:26:09.290 "name": "6c6b5c21-891a-41c9-8d7d-31e5325c23cc", 00:26:09.290 "aliases": [ 00:26:09.290 "lvs/nvme0n1p0" 00:26:09.290 ], 00:26:09.290 "product_name": "Logical Volume", 00:26:09.290 "block_size": 4096, 00:26:09.290 "num_blocks": 26476544, 00:26:09.290 "uuid": "6c6b5c21-891a-41c9-8d7d-31e5325c23cc", 00:26:09.290 "assigned_rate_limits": { 00:26:09.290 "rw_ios_per_sec": 0, 00:26:09.290 "rw_mbytes_per_sec": 0, 00:26:09.290 "r_mbytes_per_sec": 0, 00:26:09.290 "w_mbytes_per_sec": 0 00:26:09.290 }, 00:26:09.290 "claimed": false, 00:26:09.290 "zoned": false, 00:26:09.290 "supported_io_types": { 00:26:09.290 "read": true, 00:26:09.290 "write": true, 00:26:09.290 "unmap": true, 00:26:09.290 "flush": false, 00:26:09.290 "reset": true, 00:26:09.290 "nvme_admin": false, 00:26:09.290 "nvme_io": false, 00:26:09.290 "nvme_io_md": false, 00:26:09.290 "write_zeroes": true, 00:26:09.290 "zcopy": false, 00:26:09.290 "get_zone_info": false, 00:26:09.290 "zone_management": false, 00:26:09.290 "zone_append": false, 00:26:09.290 "compare": false, 00:26:09.290 "compare_and_write": false, 00:26:09.290 "abort": false, 00:26:09.290 "seek_hole": true, 00:26:09.290 "seek_data": true, 00:26:09.290 "copy": false, 00:26:09.290 "nvme_iov_md": false 00:26:09.290 }, 00:26:09.290 "driver_specific": { 00:26:09.290 "lvol": { 00:26:09.290 "lvol_store_uuid": "89c098a9-f0b4-4b99-9f49-a4e4201d5a40", 00:26:09.290 "base_bdev": "nvme0n1", 00:26:09.290 "thin_provision": true, 00:26:09.290 "num_allocated_clusters": 0, 00:26:09.290 "snapshot": false, 00:26:09.290 "clone": false, 00:26:09.290 "esnap_clone": false 00:26:09.290 } 00:26:09.290 } 00:26:09.290 } 00:26:09.290 ]' 00:26:09.290 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:09.290 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:26:09.290 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:09.290 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:09.290 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:09.290 13:20:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:26:09.290 13:20:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:26:09.290 13:20:15 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:26:09.548 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:26:09.548 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c6b5c21-891a-41c9-8d7d-31e5325c23cc 00:26:10.115 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:10.115 { 00:26:10.115 "name": "6c6b5c21-891a-41c9-8d7d-31e5325c23cc", 00:26:10.115 "aliases": [ 00:26:10.115 "lvs/nvme0n1p0" 00:26:10.115 ], 00:26:10.115 "product_name": "Logical Volume", 00:26:10.115 "block_size": 4096, 00:26:10.115 "num_blocks": 26476544, 00:26:10.115 "uuid": "6c6b5c21-891a-41c9-8d7d-31e5325c23cc", 00:26:10.115 "assigned_rate_limits": { 00:26:10.115 "rw_ios_per_sec": 0, 00:26:10.115 "rw_mbytes_per_sec": 0, 00:26:10.115 "r_mbytes_per_sec": 0, 00:26:10.115 "w_mbytes_per_sec": 0 00:26:10.115 }, 00:26:10.115 "claimed": false, 00:26:10.115 "zoned": false, 00:26:10.116 "supported_io_types": { 00:26:10.116 "read": true, 00:26:10.116 "write": true, 00:26:10.116 "unmap": true, 00:26:10.116 "flush": false, 00:26:10.116 "reset": true, 00:26:10.116 "nvme_admin": false, 00:26:10.116 "nvme_io": false, 00:26:10.116 "nvme_io_md": false, 00:26:10.116 "write_zeroes": true, 00:26:10.116 "zcopy": false, 00:26:10.116 "get_zone_info": false, 00:26:10.116 "zone_management": false, 00:26:10.116 "zone_append": false, 00:26:10.116 "compare": false, 00:26:10.116 "compare_and_write": false, 00:26:10.116 "abort": false, 00:26:10.116 "seek_hole": true, 00:26:10.116 "seek_data": true, 00:26:10.116 "copy": false, 00:26:10.116 "nvme_iov_md": false 00:26:10.116 }, 00:26:10.116 "driver_specific": { 00:26:10.116 "lvol": { 00:26:10.116 "lvol_store_uuid": "89c098a9-f0b4-4b99-9f49-a4e4201d5a40", 00:26:10.116 "base_bdev": "nvme0n1", 00:26:10.116 "thin_provision": true, 00:26:10.116 "num_allocated_clusters": 0, 00:26:10.116 "snapshot": false, 00:26:10.116 "clone": false, 00:26:10.116 "esnap_clone": false 00:26:10.116 } 00:26:10.116 } 00:26:10.116 } 00:26:10.116 ]' 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:26:10.116 13:20:16 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6c6b5c21-891a-41c9-8d7d-31e5325c23cc -c nvc0n1p0 --l2p_dram_limit 60 00:26:10.374 [2024-12-06 13:20:16.744775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.374 [2024-12-06 13:20:16.745057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:10.374 [2024-12-06 13:20:16.745200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:10.375 
[2024-12-06 13:20:16.745333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.745491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.745551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.375 [2024-12-06 13:20:16.745670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:10.375 [2024-12-06 13:20:16.745722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.745891] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:10.375 [2024-12-06 13:20:16.746942] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:10.375 [2024-12-06 13:20:16.747134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.747253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.375 [2024-12-06 13:20:16.747285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:26:10.375 [2024-12-06 13:20:16.747300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.747441] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 20a4ded1-2b49-4fa2-a080-f8b6df5c3ff3 00:26:10.375 [2024-12-06 13:20:16.748623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.748674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:10.375 [2024-12-06 13:20:16.748693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:10.375 [2024-12-06 13:20:16.748707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.753618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.753852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.375 [2024-12-06 13:20:16.753883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.798 ms 00:26:10.375 [2024-12-06 13:20:16.753900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.754058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.754083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.375 [2024-12-06 13:20:16.754097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:26:10.375 [2024-12-06 13:20:16.754124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.754221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.754244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:10.375 [2024-12-06 13:20:16.754259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:10.375 [2024-12-06 13:20:16.754273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.754314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:10.375 [2024-12-06 13:20:16.758929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 
13:20:16.758970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.375 [2024-12-06 13:20:16.758993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:26:10.375 [2024-12-06 13:20:16.759009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.759065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.759081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:10.375 [2024-12-06 13:20:16.759096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:10.375 [2024-12-06 13:20:16.759108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.759196] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:10.375 [2024-12-06 13:20:16.759400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:10.375 [2024-12-06 13:20:16.759430] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:10.375 [2024-12-06 13:20:16.759448] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:10.375 [2024-12-06 13:20:16.759477] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:10.375 [2024-12-06 13:20:16.759494] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:10.375 [2024-12-06 13:20:16.759511] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:10.375 [2024-12-06 13:20:16.759524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:10.375 [2024-12-06 13:20:16.759537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:10.375 [2024-12-06 13:20:16.759548] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:10.375 [2024-12-06 13:20:16.759564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.759578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:10.375 [2024-12-06 13:20:16.759593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:26:10.375 [2024-12-06 13:20:16.759605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.759722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.375 [2024-12-06 13:20:16.759738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:10.375 [2024-12-06 13:20:16.759753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:10.375 [2024-12-06 13:20:16.759765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.375 [2024-12-06 13:20:16.759947] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:10.375 [2024-12-06 13:20:16.759968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:10.375 [2024-12-06 13:20:16.759992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.375 [2024-12-06 13:20:16.760004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760018] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:26:10.375 [2024-12-06 13:20:16.760029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:10.375 [2024-12-06 13:20:16.760056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:10.375 [2024-12-06 13:20:16.760069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.375 [2024-12-06 13:20:16.760094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:10.375 [2024-12-06 13:20:16.760104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:10.375 [2024-12-06 13:20:16.760117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.375 [2024-12-06 13:20:16.760127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:10.375 [2024-12-06 13:20:16.760140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:10.375 [2024-12-06 13:20:16.760151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:10.375 [2024-12-06 13:20:16.760177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:10.375 [2024-12-06 13:20:16.760190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:10.375 [2024-12-06 13:20:16.760217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.375 [2024-12-06 13:20:16.760242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:10.375 [2024-12-06 13:20:16.760253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.375 [2024-12-06 13:20:16.760276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:10.375 [2024-12-06 13:20:16.760290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.375 [2024-12-06 13:20:16.760313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:10.375 [2024-12-06 13:20:16.760324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.375 [2024-12-06 13:20:16.760347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:10.375 [2024-12-06 13:20:16.760364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.375 [2024-12-06 13:20:16.760409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:10.375 [2024-12-06 13:20:16.760420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:10.375 [2024-12-06 13:20:16.760433] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.375 [2024-12-06 13:20:16.760444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:10.375 [2024-12-06 13:20:16.760456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:10.375 [2024-12-06 13:20:16.760467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:10.375 [2024-12-06 13:20:16.760491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:10.375 [2024-12-06 13:20:16.760504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760514] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:10.375 [2024-12-06 13:20:16.760528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:10.375 [2024-12-06 13:20:16.760539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.375 [2024-12-06 13:20:16.760553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.375 [2024-12-06 13:20:16.760565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:10.375 [2024-12-06 13:20:16.760580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:10.375 [2024-12-06 13:20:16.760592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:10.376 [2024-12-06 13:20:16.760605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:10.376 [2024-12-06 13:20:16.760617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:10.376 [2024-12-06 13:20:16.760631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:10.376 [2024-12-06 13:20:16.760644] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:10.376 [2024-12-06 13:20:16.760661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.376 [2024-12-06 13:20:16.760675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:10.376 [2024-12-06 13:20:16.760689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:10.376 [2024-12-06 13:20:16.760701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:10.376 [2024-12-06 13:20:16.760717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:10.376 [2024-12-06 13:20:16.760729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:10.376 [2024-12-06 13:20:16.760743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:10.376 [2024-12-06 13:20:16.760755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:10.376 [2024-12-06 13:20:16.760768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:26:10.376 [2024-12-06 13:20:16.760780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:10.376 [2024-12-06 13:20:16.760796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:10.376 [2024-12-06 13:20:16.760808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:10.376 [2024-12-06 13:20:16.760822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:10.376 [2024-12-06 13:20:16.760833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:10.376 [2024-12-06 13:20:16.760863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:10.376 [2024-12-06 13:20:16.760877] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:10.376 [2024-12-06 13:20:16.760893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.376 [2024-12-06 13:20:16.760908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:10.376 [2024-12-06 13:20:16.760922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:10.376 [2024-12-06 13:20:16.760934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:10.376 [2024-12-06 13:20:16.760949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:10.376 [2024-12-06 13:20:16.760962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.376 [2024-12-06 13:20:16.760976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:10.376 [2024-12-06 13:20:16.760988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:26:10.376 [2024-12-06 13:20:16.761002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.376 [2024-12-06 13:20:16.761077] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
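The repeated xtrace blocks above (autotest_common.sh@1382-1392) are three invocations of the same size helper: bdev_get_bdevs is queried for a single bdev, jq extracts block_size and num_blocks, and their product is echoed in MiB. A minimal sketch of that pattern, reconstructed from the trace and assuming the rpc.py path used throughout this run plus a live SPDK target:

  get_bdev_size() {
      local bdev_name=$1
      local bdev_info bs nb
      # Same RPC the trace shows at autotest_common.sh@1386
      bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in this run
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 in this run
      echo $(( nb * bs / 1024 / 1024 ))             # 26476544 * 4096 B = 103424 MiB
  }

The earlier "line 52: [: -eq: unary operator expected" failure is the classic symptom of this scripting style: an unset variable expanded unquoted inside [ ... -eq 1 ] collapses the test to [ -eq 1 ]. Guarding the expansion, e.g. [ "${flag:-0}" -eq 1 ] (flag is illustrative; the trace does not show which variable was empty), would avoid it.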
00:26:10.376 [2024-12-06 13:20:16.761102] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:13.657 [2024-12-06 13:20:20.061185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.658 [2024-12-06 13:20:20.061274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:13.658 [2024-12-06 13:20:20.061298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3300.128 ms 00:26:13.658 [2024-12-06 13:20:20.061314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.658 [2024-12-06 13:20:20.094519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.658 [2024-12-06 13:20:20.094791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:13.658 [2024-12-06 13:20:20.094825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.911 ms 00:26:13.658 [2024-12-06 13:20:20.094867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.658 [2024-12-06 13:20:20.095064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.658 [2024-12-06 13:20:20.095089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:13.658 [2024-12-06 13:20:20.095104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:13.658 [2024-12-06 13:20:20.095121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.658 [2024-12-06 13:20:20.150274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.658 [2024-12-06 13:20:20.150598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:13.658 [2024-12-06 13:20:20.150639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.086 ms 00:26:13.658 [2024-12-06 13:20:20.150659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.658 [2024-12-06 13:20:20.150732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.658 [2024-12-06 13:20:20.150752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.658 [2024-12-06 13:20:20.150766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:13.658 [2024-12-06 13:20:20.150781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.658 [2024-12-06 13:20:20.151258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.658 [2024-12-06 13:20:20.151283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:13.658 [2024-12-06 13:20:20.151298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:26:13.658 [2024-12-06 13:20:20.151315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.658 [2024-12-06 13:20:20.151498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.658 [2024-12-06 13:20:20.151522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.658 [2024-12-06 13:20:20.151536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:26:13.658 [2024-12-06 13:20:20.151553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.658 [2024-12-06 13:20:20.169855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.658 [2024-12-06 13:20:20.169928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.658 [2024-12-06 
13:20:20.169950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.268 ms 00:26:13.658 [2024-12-06 13:20:20.169965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.658 [2024-12-06 13:20:20.183371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:13.915 [2024-12-06 13:20:20.197488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.915 [2024-12-06 13:20:20.197796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:13.915 [2024-12-06 13:20:20.197864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.357 ms 00:26:13.915 [2024-12-06 13:20:20.197882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.915 [2024-12-06 13:20:20.259377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.915 [2024-12-06 13:20:20.259451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:13.915 [2024-12-06 13:20:20.259504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.418 ms 00:26:13.915 [2024-12-06 13:20:20.259518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.915 [2024-12-06 13:20:20.259773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.915 [2024-12-06 13:20:20.259799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:13.915 [2024-12-06 13:20:20.259820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:26:13.915 [2024-12-06 13:20:20.259832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.915 [2024-12-06 13:20:20.291602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.915 [2024-12-06 13:20:20.291654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:13.915 [2024-12-06 13:20:20.291677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.649 ms 00:26:13.915 [2024-12-06 13:20:20.291691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.916 [2024-12-06 13:20:20.323083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.916 [2024-12-06 13:20:20.323316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:13.916 [2024-12-06 13:20:20.323356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.324 ms 00:26:13.916 [2024-12-06 13:20:20.323371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.916 [2024-12-06 13:20:20.324193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.916 [2024-12-06 13:20:20.324225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:13.916 [2024-12-06 13:20:20.324244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:26:13.916 [2024-12-06 13:20:20.324256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.916 [2024-12-06 13:20:20.419721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.916 [2024-12-06 13:20:20.419789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:13.916 [2024-12-06 13:20:20.419819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.369 ms 00:26:13.916 [2024-12-06 13:20:20.419836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.173 [2024-12-06 
13:20:20.452508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.173 [2024-12-06 13:20:20.452559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:14.173 [2024-12-06 13:20:20.452583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.513 ms 00:26:14.173 [2024-12-06 13:20:20.452597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.173 [2024-12-06 13:20:20.484268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.173 [2024-12-06 13:20:20.484314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:14.173 [2024-12-06 13:20:20.484335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.610 ms 00:26:14.173 [2024-12-06 13:20:20.484348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.173 [2024-12-06 13:20:20.516224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.173 [2024-12-06 13:20:20.516286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:14.173 [2024-12-06 13:20:20.516312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.818 ms 00:26:14.173 [2024-12-06 13:20:20.516324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.173 [2024-12-06 13:20:20.516392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.173 [2024-12-06 13:20:20.516411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:14.173 [2024-12-06 13:20:20.516433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:14.173 [2024-12-06 13:20:20.516446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.173 [2024-12-06 13:20:20.516600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.173 [2024-12-06 13:20:20.516621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:14.173 [2024-12-06 13:20:20.516637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:14.173 [2024-12-06 13:20:20.516649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.173 [2024-12-06 13:20:20.517805] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3772.546 ms, result 0 00:26:14.173 { 00:26:14.173 "name": "ftl0", 00:26:14.173 "uuid": "20a4ded1-2b49-4fa2-a080-f8b6df5c3ff3" 00:26:14.173 } 00:26:14.173 13:20:20 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:26:14.173 13:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:26:14.173 13:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:14.173 13:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:26:14.173 13:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:14.173 13:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:14.173 13:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:14.442 13:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:26:14.700 [ 00:26:14.700 { 00:26:14.700 "name": "ftl0", 00:26:14.700 "aliases": [ 00:26:14.700 "20a4ded1-2b49-4fa2-a080-f8b6df5c3ff3" 00:26:14.700 ], 00:26:14.700 "product_name": "FTL 
disk", 00:26:14.700 "block_size": 4096, 00:26:14.700 "num_blocks": 20971520, 00:26:14.700 "uuid": "20a4ded1-2b49-4fa2-a080-f8b6df5c3ff3", 00:26:14.700 "assigned_rate_limits": { 00:26:14.700 "rw_ios_per_sec": 0, 00:26:14.700 "rw_mbytes_per_sec": 0, 00:26:14.700 "r_mbytes_per_sec": 0, 00:26:14.700 "w_mbytes_per_sec": 0 00:26:14.700 }, 00:26:14.700 "claimed": false, 00:26:14.700 "zoned": false, 00:26:14.700 "supported_io_types": { 00:26:14.700 "read": true, 00:26:14.700 "write": true, 00:26:14.700 "unmap": true, 00:26:14.700 "flush": true, 00:26:14.700 "reset": false, 00:26:14.700 "nvme_admin": false, 00:26:14.700 "nvme_io": false, 00:26:14.700 "nvme_io_md": false, 00:26:14.700 "write_zeroes": true, 00:26:14.700 "zcopy": false, 00:26:14.700 "get_zone_info": false, 00:26:14.700 "zone_management": false, 00:26:14.700 "zone_append": false, 00:26:14.700 "compare": false, 00:26:14.700 "compare_and_write": false, 00:26:14.700 "abort": false, 00:26:14.700 "seek_hole": false, 00:26:14.700 "seek_data": false, 00:26:14.700 "copy": false, 00:26:14.700 "nvme_iov_md": false 00:26:14.700 }, 00:26:14.700 "driver_specific": { 00:26:14.700 "ftl": { 00:26:14.700 "base_bdev": "6c6b5c21-891a-41c9-8d7d-31e5325c23cc", 00:26:14.700 "cache": "nvc0n1p0" 00:26:14.700 } 00:26:14.700 } 00:26:14.700 } 00:26:14.700 ] 00:26:14.700 13:20:21 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:26:14.700 13:20:21 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:26:14.700 13:20:21 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:14.957 13:20:21 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:26:14.957 13:20:21 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:15.214 [2024-12-06 13:20:21.663252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.214 [2024-12-06 13:20:21.663328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:15.214 [2024-12-06 13:20:21.663351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:15.214 [2024-12-06 13:20:21.663367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.214 [2024-12-06 13:20:21.663413] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:15.214 [2024-12-06 13:20:21.666787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.214 [2024-12-06 13:20:21.666825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:15.214 [2024-12-06 13:20:21.666856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.343 ms 00:26:15.214 [2024-12-06 13:20:21.666872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.214 [2024-12-06 13:20:21.667358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.214 [2024-12-06 13:20:21.667385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:15.214 [2024-12-06 13:20:21.667402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:26:15.214 [2024-12-06 13:20:21.667414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.214 [2024-12-06 13:20:21.670743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.214 [2024-12-06 13:20:21.670781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:15.214 
[2024-12-06 13:20:21.670800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.262 ms 00:26:15.214 [2024-12-06 13:20:21.670812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.214 [2024-12-06 13:20:21.677549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.214 [2024-12-06 13:20:21.677719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:15.215 [2024-12-06 13:20:21.677754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.680 ms 00:26:15.215 [2024-12-06 13:20:21.677768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.215 [2024-12-06 13:20:21.709221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.215 [2024-12-06 13:20:21.709266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:15.215 [2024-12-06 13:20:21.709309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.315 ms 00:26:15.215 [2024-12-06 13:20:21.709322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.215 [2024-12-06 13:20:21.728163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.215 [2024-12-06 13:20:21.728336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:15.215 [2024-12-06 13:20:21.728378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.776 ms 00:26:15.215 [2024-12-06 13:20:21.728392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.215 [2024-12-06 13:20:21.728675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.215 [2024-12-06 13:20:21.728699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:15.215 [2024-12-06 13:20:21.728716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:26:15.215 [2024-12-06 13:20:21.728729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.472 [2024-12-06 13:20:21.760416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.472 [2024-12-06 13:20:21.760466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:15.472 [2024-12-06 13:20:21.760489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.627 ms 00:26:15.472 [2024-12-06 13:20:21.760502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.472 [2024-12-06 13:20:21.791807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.472 [2024-12-06 13:20:21.792006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:15.472 [2024-12-06 13:20:21.792043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.242 ms 00:26:15.472 [2024-12-06 13:20:21.792057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.472 [2024-12-06 13:20:21.822897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.472 [2024-12-06 13:20:21.823064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:15.472 [2024-12-06 13:20:21.823100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.753 ms 00:26:15.472 [2024-12-06 13:20:21.823115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.472 [2024-12-06 13:20:21.854022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.472 [2024-12-06 13:20:21.854193] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:15.472 [2024-12-06 13:20:21.854229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.762 ms 00:26:15.472 [2024-12-06 13:20:21.854243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.472 [2024-12-06 13:20:21.854305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:15.472 [2024-12-06 13:20:21.854329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 
[2024-12-06 13:20:21.854654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.854994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:26:15.473 [2024-12-06 13:20:21.855040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:15.473 [2024-12-06 13:20:21.855584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:15.474 [2024-12-06 13:20:21.855836] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:15.474 [2024-12-06 13:20:21.855864] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 20a4ded1-2b49-4fa2-a080-f8b6df5c3ff3 00:26:15.474 [2024-12-06 13:20:21.855877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:15.474 [2024-12-06 13:20:21.855893] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:15.474 [2024-12-06 13:20:21.855904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:15.474 [2024-12-06 13:20:21.855921] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:15.474 [2024-12-06 13:20:21.855932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:15.474 [2024-12-06 13:20:21.855946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:15.474 [2024-12-06 13:20:21.855958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:15.474 [2024-12-06 13:20:21.855970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:15.474 [2024-12-06 13:20:21.855980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:15.474 [2024-12-06 13:20:21.855995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.474 [2024-12-06 13:20:21.856007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:15.474 [2024-12-06 13:20:21.856023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:26:15.474 [2024-12-06 13:20:21.856035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.474 [2024-12-06 13:20:21.873104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.474 [2024-12-06 13:20:21.873263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:15.474 [2024-12-06 13:20:21.873409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.987 ms 00:26:15.474 [2024-12-06 13:20:21.873465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.474 [2024-12-06 13:20:21.874076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.474 [2024-12-06 13:20:21.874230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:15.474 [2024-12-06 13:20:21.874349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:26:15.474 [2024-12-06 13:20:21.874484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.474 [2024-12-06 13:20:21.932911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.474 [2024-12-06 13:20:21.933109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:15.474 [2024-12-06 13:20:21.933234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.474 [2024-12-06 13:20:21.933287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
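The statistics dump above (user writes: 0, WAF: inf) and the rollback steps continuing below all belong to the bdev_ftl_unload call traced earlier. For reference, the FTL lifecycle this test exercises reduces to four RPCs, each of which appears verbatim in the trace; a condensed sketch assuming the same rpc.py path and device addresses:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache NVMe -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                            # 5171 MiB NV cache -> nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 \
      -d 6c6b5c21-891a-41c9-8d7d-31e5325c23cc -c nvc0n1p0 \
      --l2p_dram_limit 60                                            # startup traced above, result 0
  $rpc bdev_ftl_unload -b ftl0                                       # this shutdown sequence

Every band in the dump above reads 0 / 261120 with wr_cnt: 0 because no user I/O has touched this instance yet; the randw-verify fio job below runs against an ftl0 recreated from the subsystem config saved via save_subsystem_config earlier in the trace.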
00:26:15.474 [2024-12-06 13:20:21.933409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.474 [2024-12-06 13:20:21.933457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:15.474 [2024-12-06 13:20:21.933563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.474 [2024-12-06 13:20:21.933615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.474 [2024-12-06 13:20:21.933829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.474 [2024-12-06 13:20:21.933907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:15.474 [2024-12-06 13:20:21.934062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.474 [2024-12-06 13:20:21.934115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.474 [2024-12-06 13:20:21.934190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.474 [2024-12-06 13:20:21.934344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:15.474 [2024-12-06 13:20:21.934401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.474 [2024-12-06 13:20:21.934444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.044578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.732 [2024-12-06 13:20:22.044776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:15.732 [2024-12-06 13:20:22.044926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.732 [2024-12-06 13:20:22.044981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.129487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.732 [2024-12-06 13:20:22.129689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:15.732 [2024-12-06 13:20:22.129819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.732 [2024-12-06 13:20:22.129900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.130077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.732 [2024-12-06 13:20:22.130135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:15.732 [2024-12-06 13:20:22.130268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.732 [2024-12-06 13:20:22.130321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.130431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.732 [2024-12-06 13:20:22.130450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:15.732 [2024-12-06 13:20:22.130466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.732 [2024-12-06 13:20:22.130479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.130640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.732 [2024-12-06 13:20:22.130660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:15.732 [2024-12-06 13:20:22.130676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.732 [2024-12-06 
13:20:22.130690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.130775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.732 [2024-12-06 13:20:22.130795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:15.732 [2024-12-06 13:20:22.130810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.732 [2024-12-06 13:20:22.130822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.130898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.732 [2024-12-06 13:20:22.130917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:15.732 [2024-12-06 13:20:22.130932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.732 [2024-12-06 13:20:22.130946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.131017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.732 [2024-12-06 13:20:22.131036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:15.732 [2024-12-06 13:20:22.131052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.732 [2024-12-06 13:20:22.131063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.732 [2024-12-06 13:20:22.131261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 467.976 ms, result 0 00:26:15.732 true 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77282 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77282 ']' 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77282 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77282 00:26:15.732 killing process with pid 77282 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77282' 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77282 00:26:15.732 13:20:22 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77282 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:20.998 13:20:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:20.998 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:26:20.998 fio-3.35 00:26:20.998 Starting 1 thread 00:26:26.316 00:26:26.316 test: (groupid=0, jobs=1): err= 0: pid=77495: Fri Dec 6 13:20:32 2024 00:26:26.316 read: IOPS=960, BW=63.8MiB/s (66.9MB/s)(255MiB/3990msec) 00:26:26.316 slat (nsec): min=5896, max=34890, avg=7764.14, stdev=3234.70 00:26:26.316 clat (usec): min=319, max=2732, avg=465.71, stdev=80.44 00:26:26.316 lat (usec): min=332, max=2739, avg=473.48, stdev=80.80 00:26:26.316 clat percentiles (usec): 00:26:26.316 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 420], 00:26:26.316 | 30.00th=[ 445], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 461], 00:26:26.316 | 70.00th=[ 478], 80.00th=[ 515], 90.00th=[ 537], 95.00th=[ 570], 00:26:26.316 | 99.00th=[ 685], 99.50th=[ 783], 99.90th=[ 1336], 99.95th=[ 1663], 00:26:26.316 | 99.99th=[ 2737] 00:26:26.316 write: IOPS=967, BW=64.3MiB/s (67.4MB/s)(256MiB/3985msec); 0 zone resets 00:26:26.316 slat (nsec): min=20299, max=99108, avg=24977.36, stdev=5731.92 00:26:26.316 clat (usec): min=345, max=1603, avg=524.95, stdev=89.09 00:26:26.316 lat (usec): min=374, max=1625, avg=549.93, stdev=88.99 00:26:26.316 clat percentiles (usec): 00:26:26.316 | 1.00th=[ 400], 5.00th=[ 420], 10.00th=[ 457], 20.00th=[ 474], 00:26:26.316 | 30.00th=[ 478], 40.00th=[ 490], 50.00th=[ 510], 60.00th=[ 537], 00:26:26.316 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 635], 00:26:26.316 | 99.00th=[ 938], 99.50th=[ 1004], 99.90th=[ 1221], 99.95th=[ 1467], 00:26:26.316 | 99.99th=[ 1598] 00:26:26.316 bw ( KiB/s): min=60384, max=68136, per=100.00%, avg=65940.57, stdev=2625.93, samples=7 00:26:26.316 iops : min= 888, max= 1002, avg=969.71, stdev=38.62, samples=7 00:26:26.316 lat (usec) : 500=61.45%, 750=36.97%, 1000=1.24% 
00:26:26.316 lat (msec) : 2=0.33%, 4=0.01% 00:26:26.316 cpu : usr=99.10%, sys=0.10%, ctx=30, majf=0, minf=1169 00:26:26.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.316 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:26.316 00:26:26.316 Run status group 0 (all jobs): 00:26:26.316 READ: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=255MiB (267MB), run=3990-3990msec 00:26:26.316 WRITE: bw=64.3MiB/s (67.4MB/s), 64.3MiB/s-64.3MiB/s (67.4MB/s-67.4MB/s), io=256MiB (269MB), run=3985-3985msec 00:26:27.692 ----------------------------------------------------- 00:26:27.692 Suppressions used: 00:26:27.692 count bytes template 00:26:27.692 1 5 /usr/src/fio/parse.c 00:26:27.692 1 8 libtcmalloc_minimal.so 00:26:27.692 1 904 libcrypto.so 00:26:27.692 ----------------------------------------------------- 00:26:27.692 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:27.692 13:20:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:27.692 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:27.692 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:27.692 fio-3.35 00:26:27.692 Starting 2 threads 00:27:06.422 00:27:06.422 first_half: (groupid=0, jobs=1): err= 0: pid=77598: Fri Dec 6 13:21:06 2024 00:27:06.422 read: IOPS=2102, BW=8410KiB/s (8611kB/s)(255MiB/31032msec) 00:27:06.422 slat (usec): min=4, max=103, avg= 7.57, stdev= 2.12 00:27:06.422 clat (usec): min=1009, max=326964, avg=45087.24, stdev=24548.80 00:27:06.422 lat (usec): min=1017, max=326971, avg=45094.81, stdev=24549.04 00:27:06.422 clat percentiles (msec): 00:27:06.422 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 40], 20.00th=[ 40], 00:27:06.422 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:27:06.422 | 70.00th=[ 44], 80.00th=[ 46], 90.00th=[ 52], 95.00th=[ 59], 00:27:06.422 | 99.00th=[ 188], 99.50th=[ 220], 99.90th=[ 279], 99.95th=[ 313], 00:27:06.422 | 99.99th=[ 317] 00:27:06.422 write: IOPS=2457, BW=9831KiB/s (10.1MB/s)(256MiB/26665msec); 0 zone resets 00:27:06.422 slat (usec): min=6, max=505, avg= 9.87, stdev= 5.76 00:27:06.422 clat (usec): min=515, max=131497, avg=15679.07, stdev=26190.65 00:27:06.422 lat (usec): min=526, max=131505, avg=15688.94, stdev=26190.83 00:27:06.422 clat percentiles (usec): 00:27:06.422 | 1.00th=[ 1020], 5.00th=[ 1336], 10.00th=[ 1582], 20.00th=[ 2040], 00:27:06.422 | 30.00th=[ 4228], 40.00th=[ 6063], 50.00th=[ 7046], 60.00th=[ 7963], 00:27:06.422 | 70.00th=[ 9634], 80.00th=[ 13960], 90.00th=[ 47449], 95.00th=[ 93848], 00:27:06.422 | 99.00th=[104334], 99.50th=[111674], 99.90th=[122160], 99.95th=[126354], 00:27:06.422 | 99.99th=[128451] 00:27:06.422 bw ( KiB/s): min= 864, max=40504, per=95.23%, avg=18724.57, stdev=10876.60, samples=28 00:27:06.422 iops : min= 216, max=10126, avg=4681.14, stdev=2719.15, samples=28 00:27:06.422 lat (usec) : 750=0.03%, 1000=0.39% 00:27:06.422 lat (msec) : 2=9.38%, 4=4.91%, 10=21.26%, 20=10.38%, 50=42.99% 00:27:06.422 lat (msec) : 100=8.17%, 250=2.38%, 500=0.09% 00:27:06.422 cpu : usr=99.03%, sys=0.16%, ctx=47, majf=0, minf=5562 00:27:06.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:06.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.422 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:06.422 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:06.422 second_half: (groupid=0, jobs=1): err= 0: pid=77599: Fri Dec 6 13:21:06 2024 00:27:06.422 read: IOPS=2114, BW=8459KiB/s (8662kB/s)(254MiB/30804msec) 00:27:06.422 slat (nsec): min=4967, max=66976, avg=7668.51, stdev=2027.41 00:27:06.422 clat (usec): min=779, max=334640, avg=46269.66, stdev=24660.88 00:27:06.422 lat (usec): min=788, max=334648, avg=46277.33, stdev=24661.17 00:27:06.422 clat percentiles (msec): 00:27:06.422 | 1.00th=[ 7], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:27:06.422 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:27:06.422 | 70.00th=[ 44], 80.00th=[ 47], 90.00th=[ 52], 
95.00th=[ 64], 00:27:06.422 | 99.00th=[ 180], 99.50th=[ 213], 99.90th=[ 313], 99.95th=[ 313], 00:27:06.422 | 99.99th=[ 330] 00:27:06.422 write: IOPS=3236, BW=12.6MiB/s (13.3MB/s)(256MiB/20250msec); 0 zone resets 00:27:06.422 slat (usec): min=6, max=652, avg=10.01, stdev= 6.02 00:27:06.422 clat (usec): min=521, max=131632, avg=14129.66, stdev=25460.91 00:27:06.422 lat (usec): min=562, max=131640, avg=14139.66, stdev=25460.95 00:27:06.422 clat percentiles (usec): 00:27:06.422 | 1.00th=[ 1090], 5.00th=[ 1401], 10.00th=[ 1582], 20.00th=[ 1860], 00:27:06.422 | 30.00th=[ 2245], 40.00th=[ 4080], 50.00th=[ 6128], 60.00th=[ 7373], 00:27:06.422 | 70.00th=[ 8848], 80.00th=[ 13435], 90.00th=[ 18744], 95.00th=[ 91751], 00:27:06.422 | 99.00th=[103285], 99.50th=[109577], 99.90th=[126354], 99.95th=[128451], 00:27:06.422 | 99.99th=[130548] 00:27:06.422 bw ( KiB/s): min= 2912, max=39920, per=100.00%, avg=22797.74, stdev=8267.87, samples=23 00:27:06.422 iops : min= 728, max= 9980, avg=5699.43, stdev=2066.97, samples=23 00:27:06.422 lat (usec) : 750=0.03%, 1000=0.26% 00:27:06.422 lat (msec) : 2=12.21%, 4=7.66%, 10=17.20%, 20=8.84%, 50=42.83% 00:27:06.422 lat (msec) : 100=8.48%, 250=2.42%, 500=0.09% 00:27:06.422 cpu : usr=99.11%, sys=0.13%, ctx=81, majf=0, minf=5555 00:27:06.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:06.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.422 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:06.422 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:06.422 00:27:06.422 Run status group 0 (all jobs): 00:27:06.422 READ: bw=16.4MiB/s (17.2MB/s), 8410KiB/s-8459KiB/s (8611kB/s-8662kB/s), io=509MiB (534MB), run=30804-31032msec 00:27:06.422 WRITE: bw=19.2MiB/s (20.1MB/s), 9831KiB/s-12.6MiB/s (10.1MB/s-13.3MB/s), io=512MiB (537MB), run=20250-26665msec 00:27:06.422 ----------------------------------------------------- 00:27:06.422 Suppressions used: 00:27:06.422 count bytes template 00:27:06.422 2 10 /usr/src/fio/parse.c 00:27:06.422 3 288 /usr/src/fio/iolog.c 00:27:06.422 1 8 libtcmalloc_minimal.so 00:27:06.422 1 904 libcrypto.so 00:27:06.422 ----------------------------------------------------- 00:27:06.422 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:06.422 13:21:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:27:06.422 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:27:06.422 fio-3.35 00:27:06.422 Starting 1 thread 00:27:21.295 00:27:21.295 test: (groupid=0, jobs=1): err= 0: pid=77978: Fri Dec 6 13:21:26 2024 00:27:21.295 read: IOPS=6330, BW=24.7MiB/s (25.9MB/s)(255MiB/10300msec) 00:27:21.295 slat (usec): min=4, max=188, avg= 6.87, stdev= 1.95 00:27:21.295 clat (usec): min=798, max=40328, avg=20208.91, stdev=1309.13 00:27:21.295 lat (usec): min=803, max=40335, avg=20215.77, stdev=1309.17 00:27:21.295 clat percentiles (usec): 00:27:21.295 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19268], 20.00th=[19530], 00:27:21.295 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20055], 00:27:21.295 | 70.00th=[20317], 80.00th=[20579], 90.00th=[21365], 95.00th=[22938], 00:27:21.295 | 99.00th=[24511], 99.50th=[26084], 99.90th=[29492], 99.95th=[34866], 00:27:21.295 | 99.99th=[39060] 00:27:21.295 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(256MiB/5626msec); 0 zone resets 00:27:21.295 slat (usec): min=6, max=418, avg= 9.47, stdev= 5.25 00:27:21.295 clat (usec): min=691, max=67107, avg=10929.43, stdev=13721.36 00:27:21.295 lat (usec): min=699, max=67117, avg=10938.90, stdev=13721.39 00:27:21.295 clat percentiles (usec): 00:27:21.295 | 1.00th=[ 988], 5.00th=[ 1188], 10.00th=[ 1319], 20.00th=[ 1516], 00:27:21.295 | 30.00th=[ 1713], 40.00th=[ 2147], 50.00th=[ 7111], 60.00th=[ 8356], 00:27:21.295 | 70.00th=[ 9503], 80.00th=[11207], 90.00th=[39584], 95.00th=[43779], 00:27:21.295 | 99.00th=[47449], 99.50th=[48497], 99.90th=[51119], 99.95th=[55837], 00:27:21.295 | 99.99th=[65274] 00:27:21.295 bw ( KiB/s): min= 8544, max=65016, per=93.77%, avg=43690.67, stdev=14156.58, samples=12 00:27:21.295 iops : min= 2136, max=16254, avg=10922.67, stdev=3539.16, samples=12 00:27:21.295 lat (usec) : 750=0.01%, 1000=0.56% 00:27:21.295 lat (msec) : 2=18.64%, 4=1.77%, 10=15.84%, 20=32.36%, 50=30.74% 00:27:21.295 lat (msec) : 100=0.09% 00:27:21.295 cpu : usr=98.91%, 
sys=0.24%, ctx=32, majf=0, minf=5565 00:27:21.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:21.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.295 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:21.295 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:21.295 00:27:21.295 Run status group 0 (all jobs): 00:27:21.295 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=255MiB (267MB), run=10300-10300msec 00:27:21.295 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=256MiB (268MB), run=5626-5626msec 00:27:21.861 ----------------------------------------------------- 00:27:21.861 Suppressions used: 00:27:21.861 count bytes template 00:27:21.861 1 5 /usr/src/fio/parse.c 00:27:21.861 2 192 /usr/src/fio/iolog.c 00:27:21.861 1 8 libtcmalloc_minimal.so 00:27:21.861 1 904 libcrypto.so 00:27:21.861 ----------------------------------------------------- 00:27:21.861 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:21.861 Remove shared memory files 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58389 /dev/shm/spdk_tgt_trace.pid76202 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:27:21.861 ************************************ 00:27:21.861 END TEST ftl_fio_basic 00:27:21.861 ************************************ 00:27:21.861 00:27:21.861 real 1m16.729s 00:27:21.861 user 2m52.951s 00:27:21.861 sys 0m3.841s 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.861 13:21:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:21.861 13:21:28 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:27:21.861 13:21:28 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:21.861 13:21:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.861 13:21:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:21.861 ************************************ 00:27:21.861 START TEST ftl_bdevperf 00:27:21.861 ************************************ 00:27:21.861 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:27:22.120 * Looking for test storage... 
00:27:22.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:22.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.120 --rc genhtml_branch_coverage=1 00:27:22.120 --rc genhtml_function_coverage=1 00:27:22.120 --rc genhtml_legend=1 00:27:22.120 --rc geninfo_all_blocks=1 00:27:22.120 --rc geninfo_unexecuted_blocks=1 00:27:22.120 00:27:22.120 ' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:22.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.120 --rc genhtml_branch_coverage=1 00:27:22.120 
--rc genhtml_function_coverage=1 00:27:22.120 --rc genhtml_legend=1 00:27:22.120 --rc geninfo_all_blocks=1 00:27:22.120 --rc geninfo_unexecuted_blocks=1 00:27:22.120 00:27:22.120 ' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:22.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.120 --rc genhtml_branch_coverage=1 00:27:22.120 --rc genhtml_function_coverage=1 00:27:22.120 --rc genhtml_legend=1 00:27:22.120 --rc geninfo_all_blocks=1 00:27:22.120 --rc geninfo_unexecuted_blocks=1 00:27:22.120 00:27:22.120 ' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:22.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.120 --rc genhtml_branch_coverage=1 00:27:22.120 --rc genhtml_function_coverage=1 00:27:22.120 --rc genhtml_legend=1 00:27:22.120 --rc geninfo_all_blocks=1 00:27:22.120 --rc geninfo_unexecuted_blocks=1 00:27:22.120 00:27:22.120 ' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.120 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78239 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78239 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78239 ']' 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.121 13:21:28 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.121 [2024-12-06 13:21:28.621910] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:27:22.121 [2024-12-06 13:21:28.622264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78239 ] 00:27:22.379 [2024-12-06 13:21:28.822185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.637 [2024-12-06 13:21:28.924532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.205 13:21:29 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.205 13:21:29 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:23.205 13:21:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:23.205 13:21:29 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:27:23.205 13:21:29 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:23.205 13:21:29 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:27:23.205 13:21:29 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:27:23.205 13:21:29 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:23.514 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:23.514 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:27:23.514 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:23.514 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:23.514 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:23.514 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:23.514 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:23.514 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:23.790 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:23.790 { 00:27:23.790 "name": "nvme0n1", 00:27:23.790 "aliases": [ 00:27:23.790 "3ad8a81e-c189-431a-a376-855398090f97" 00:27:23.790 ], 00:27:23.790 "product_name": "NVMe disk", 00:27:23.790 "block_size": 4096, 00:27:23.790 "num_blocks": 1310720, 00:27:23.790 "uuid": "3ad8a81e-c189-431a-a376-855398090f97", 00:27:23.790 "numa_id": -1, 00:27:23.790 "assigned_rate_limits": { 00:27:23.790 "rw_ios_per_sec": 0, 00:27:23.790 "rw_mbytes_per_sec": 0, 00:27:23.790 "r_mbytes_per_sec": 0, 00:27:23.790 "w_mbytes_per_sec": 0 00:27:23.790 }, 00:27:23.790 "claimed": true, 00:27:23.790 "claim_type": "read_many_write_one", 00:27:23.790 "zoned": false, 00:27:23.790 "supported_io_types": { 00:27:23.790 "read": true, 00:27:23.790 "write": true, 00:27:23.790 "unmap": true, 00:27:23.790 "flush": true, 00:27:23.790 "reset": true, 00:27:23.790 "nvme_admin": true, 00:27:23.790 "nvme_io": true, 00:27:23.790 "nvme_io_md": false, 00:27:23.790 "write_zeroes": true, 00:27:23.790 "zcopy": false, 00:27:23.790 "get_zone_info": false, 00:27:23.790 "zone_management": false, 00:27:23.790 "zone_append": false, 00:27:23.790 "compare": true, 00:27:23.790 "compare_and_write": false, 00:27:23.790 "abort": true, 00:27:23.790 "seek_hole": false, 00:27:23.790 "seek_data": false, 00:27:23.790 "copy": true, 00:27:23.790 "nvme_iov_md": false 00:27:23.790 }, 00:27:23.790 "driver_specific": { 00:27:23.790 
"nvme": [ 00:27:23.790 { 00:27:23.790 "pci_address": "0000:00:11.0", 00:27:23.790 "trid": { 00:27:23.790 "trtype": "PCIe", 00:27:23.790 "traddr": "0000:00:11.0" 00:27:23.790 }, 00:27:23.790 "ctrlr_data": { 00:27:23.790 "cntlid": 0, 00:27:23.790 "vendor_id": "0x1b36", 00:27:23.790 "model_number": "QEMU NVMe Ctrl", 00:27:23.790 "serial_number": "12341", 00:27:23.790 "firmware_revision": "8.0.0", 00:27:23.790 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:23.790 "oacs": { 00:27:23.790 "security": 0, 00:27:23.790 "format": 1, 00:27:23.790 "firmware": 0, 00:27:23.790 "ns_manage": 1 00:27:23.790 }, 00:27:23.790 "multi_ctrlr": false, 00:27:23.790 "ana_reporting": false 00:27:23.790 }, 00:27:23.790 "vs": { 00:27:23.790 "nvme_version": "1.4" 00:27:23.790 }, 00:27:23.790 "ns_data": { 00:27:23.790 "id": 1, 00:27:23.790 "can_share": false 00:27:23.790 } 00:27:23.790 } 00:27:23.790 ], 00:27:23.790 "mp_policy": "active_passive" 00:27:23.790 } 00:27:23.790 } 00:27:23.790 ]' 00:27:23.790 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:24.048 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:24.306 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=89c098a9-f0b4-4b99-9f49-a4e4201d5a40 00:27:24.306 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:27:24.306 13:21:30 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 89c098a9-f0b4-4b99-9f49-a4e4201d5a40 00:27:24.565 13:21:31 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:25.131 13:21:31 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3efd8643-8425-42f5-ae1f-591a76800e7a 00:27:25.131 13:21:31 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3efd8643-8425-42f5-ae1f-591a76800e7a 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=0758f00a-e674-4e92-98f1-683eec29a296 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0758f00a-e674-4e92-98f1-683eec29a296 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=0758f00a-e674-4e92-98f1-683eec29a296 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 0758f00a-e674-4e92-98f1-683eec29a296 00:27:25.389 13:21:31 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0758f00a-e674-4e92-98f1-683eec29a296 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:25.389 13:21:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0758f00a-e674-4e92-98f1-683eec29a296 00:27:25.646 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:25.646 { 00:27:25.646 "name": "0758f00a-e674-4e92-98f1-683eec29a296", 00:27:25.646 "aliases": [ 00:27:25.646 "lvs/nvme0n1p0" 00:27:25.646 ], 00:27:25.646 "product_name": "Logical Volume", 00:27:25.646 "block_size": 4096, 00:27:25.646 "num_blocks": 26476544, 00:27:25.646 "uuid": "0758f00a-e674-4e92-98f1-683eec29a296", 00:27:25.646 "assigned_rate_limits": { 00:27:25.646 "rw_ios_per_sec": 0, 00:27:25.646 "rw_mbytes_per_sec": 0, 00:27:25.646 "r_mbytes_per_sec": 0, 00:27:25.646 "w_mbytes_per_sec": 0 00:27:25.646 }, 00:27:25.646 "claimed": false, 00:27:25.646 "zoned": false, 00:27:25.646 "supported_io_types": { 00:27:25.646 "read": true, 00:27:25.646 "write": true, 00:27:25.646 "unmap": true, 00:27:25.647 "flush": false, 00:27:25.647 "reset": true, 00:27:25.647 "nvme_admin": false, 00:27:25.647 "nvme_io": false, 00:27:25.647 "nvme_io_md": false, 00:27:25.647 "write_zeroes": true, 00:27:25.647 "zcopy": false, 00:27:25.647 "get_zone_info": false, 00:27:25.647 "zone_management": false, 00:27:25.647 "zone_append": false, 00:27:25.647 "compare": false, 00:27:25.647 "compare_and_write": false, 00:27:25.647 "abort": false, 00:27:25.647 "seek_hole": true, 00:27:25.647 "seek_data": true, 00:27:25.647 "copy": false, 00:27:25.647 "nvme_iov_md": false 00:27:25.647 }, 00:27:25.647 "driver_specific": { 00:27:25.647 "lvol": { 00:27:25.647 "lvol_store_uuid": "3efd8643-8425-42f5-ae1f-591a76800e7a", 00:27:25.647 "base_bdev": "nvme0n1", 00:27:25.647 "thin_provision": true, 00:27:25.647 "num_allocated_clusters": 0, 00:27:25.647 "snapshot": false, 00:27:25.647 "clone": false, 00:27:25.647 "esnap_clone": false 00:27:25.647 } 00:27:25.647 } 00:27:25.647 } 00:27:25.647 ]' 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:27:25.647 13:21:32 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:26.213 13:21:32 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:26.213 13:21:32 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:26.213 13:21:32 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 0758f00a-e674-4e92-98f1-683eec29a296 00:27:26.213 13:21:32 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=0758f00a-e674-4e92-98f1-683eec29a296 00:27:26.213 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:26.213 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:26.213 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:26.213 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0758f00a-e674-4e92-98f1-683eec29a296 00:27:26.471 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:26.471 { 00:27:26.471 "name": "0758f00a-e674-4e92-98f1-683eec29a296", 00:27:26.471 "aliases": [ 00:27:26.471 "lvs/nvme0n1p0" 00:27:26.471 ], 00:27:26.471 "product_name": "Logical Volume", 00:27:26.471 "block_size": 4096, 00:27:26.471 "num_blocks": 26476544, 00:27:26.471 "uuid": "0758f00a-e674-4e92-98f1-683eec29a296", 00:27:26.471 "assigned_rate_limits": { 00:27:26.471 "rw_ios_per_sec": 0, 00:27:26.471 "rw_mbytes_per_sec": 0, 00:27:26.471 "r_mbytes_per_sec": 0, 00:27:26.471 "w_mbytes_per_sec": 0 00:27:26.471 }, 00:27:26.471 "claimed": false, 00:27:26.471 "zoned": false, 00:27:26.471 "supported_io_types": { 00:27:26.471 "read": true, 00:27:26.471 "write": true, 00:27:26.471 "unmap": true, 00:27:26.471 "flush": false, 00:27:26.471 "reset": true, 00:27:26.471 "nvme_admin": false, 00:27:26.471 "nvme_io": false, 00:27:26.471 "nvme_io_md": false, 00:27:26.471 "write_zeroes": true, 00:27:26.471 "zcopy": false, 00:27:26.471 "get_zone_info": false, 00:27:26.471 "zone_management": false, 00:27:26.471 "zone_append": false, 00:27:26.471 "compare": false, 00:27:26.471 "compare_and_write": false, 00:27:26.471 "abort": false, 00:27:26.471 "seek_hole": true, 00:27:26.471 "seek_data": true, 00:27:26.471 "copy": false, 00:27:26.471 "nvme_iov_md": false 00:27:26.471 }, 00:27:26.471 "driver_specific": { 00:27:26.471 "lvol": { 00:27:26.471 "lvol_store_uuid": "3efd8643-8425-42f5-ae1f-591a76800e7a", 00:27:26.471 "base_bdev": "nvme0n1", 00:27:26.471 "thin_provision": true, 00:27:26.471 "num_allocated_clusters": 0, 00:27:26.471 "snapshot": false, 00:27:26.471 "clone": false, 00:27:26.471 "esnap_clone": false 00:27:26.471 } 00:27:26.471 } 00:27:26.471 } 00:27:26.471 ]' 00:27:26.472 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:26.472 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:26.472 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:26.472 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:26.472 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:26.472 13:21:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:26.472 13:21:32 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:27:26.472 13:21:32 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:26.730 13:21:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:27:26.730 13:21:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 0758f00a-e674-4e92-98f1-683eec29a296 00:27:26.730 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=0758f00a-e674-4e92-98f1-683eec29a296 00:27:26.730 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:26.730 13:21:33 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:27:26.730 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:26.730 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0758f00a-e674-4e92-98f1-683eec29a296 00:27:26.987 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:26.987 { 00:27:26.987 "name": "0758f00a-e674-4e92-98f1-683eec29a296", 00:27:26.987 "aliases": [ 00:27:26.987 "lvs/nvme0n1p0" 00:27:26.987 ], 00:27:26.987 "product_name": "Logical Volume", 00:27:26.987 "block_size": 4096, 00:27:26.987 "num_blocks": 26476544, 00:27:26.987 "uuid": "0758f00a-e674-4e92-98f1-683eec29a296", 00:27:26.987 "assigned_rate_limits": { 00:27:26.987 "rw_ios_per_sec": 0, 00:27:26.987 "rw_mbytes_per_sec": 0, 00:27:26.987 "r_mbytes_per_sec": 0, 00:27:26.987 "w_mbytes_per_sec": 0 00:27:26.987 }, 00:27:26.987 "claimed": false, 00:27:26.987 "zoned": false, 00:27:26.987 "supported_io_types": { 00:27:26.988 "read": true, 00:27:26.988 "write": true, 00:27:26.988 "unmap": true, 00:27:26.988 "flush": false, 00:27:26.988 "reset": true, 00:27:26.988 "nvme_admin": false, 00:27:26.988 "nvme_io": false, 00:27:26.988 "nvme_io_md": false, 00:27:26.988 "write_zeroes": true, 00:27:26.988 "zcopy": false, 00:27:26.988 "get_zone_info": false, 00:27:26.988 "zone_management": false, 00:27:26.988 "zone_append": false, 00:27:26.988 "compare": false, 00:27:26.988 "compare_and_write": false, 00:27:26.988 "abort": false, 00:27:26.988 "seek_hole": true, 00:27:26.988 "seek_data": true, 00:27:26.988 "copy": false, 00:27:26.988 "nvme_iov_md": false 00:27:26.988 }, 00:27:26.988 "driver_specific": { 00:27:26.988 "lvol": { 00:27:26.988 "lvol_store_uuid": "3efd8643-8425-42f5-ae1f-591a76800e7a", 00:27:26.988 "base_bdev": "nvme0n1", 00:27:26.988 "thin_provision": true, 00:27:26.988 "num_allocated_clusters": 0, 00:27:26.988 "snapshot": false, 00:27:26.988 "clone": false, 00:27:26.988 "esnap_clone": false 00:27:26.988 } 00:27:26.988 } 00:27:26.988 } 00:27:26.988 ]' 00:27:26.988 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:26.988 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:26.988 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:27.245 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:27.245 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:27.245 13:21:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:27.245 13:21:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:27:27.245 13:21:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0758f00a-e674-4e92-98f1-683eec29a296 -c nvc0n1p0 --l2p_dram_limit 20 00:27:27.503 [2024-12-06 13:21:33.806403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.503 [2024-12-06 13:21:33.806483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:27.503 [2024-12-06 13:21:33.806506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:27.503 [2024-12-06 13:21:33.806521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.503 [2024-12-06 13:21:33.806603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.806623] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:27.504 [2024-12-06 13:21:33.806636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:27.504 [2024-12-06 13:21:33.806650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.806676] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:27.504 [2024-12-06 13:21:33.807698] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:27.504 [2024-12-06 13:21:33.807736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.807753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:27.504 [2024-12-06 13:21:33.807766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:27:27.504 [2024-12-06 13:21:33.807779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.807988] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0daad8ef-fbe6-4877-a563-7da52651bd38 00:27:27.504 [2024-12-06 13:21:33.809047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.809087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:27.504 [2024-12-06 13:21:33.809110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:27.504 [2024-12-06 13:21:33.809122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.813869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.813933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:27.504 [2024-12-06 13:21:33.813953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.691 ms 00:27:27.504 [2024-12-06 13:21:33.813968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.814102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.814122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:27.504 [2024-12-06 13:21:33.814142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:27:27.504 [2024-12-06 13:21:33.814154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.814227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.814244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:27.504 [2024-12-06 13:21:33.814259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:27.504 [2024-12-06 13:21:33.814270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.814321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:27.504 [2024-12-06 13:21:33.818896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.818946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:27.504 [2024-12-06 13:21:33.818963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:27:27.504 [2024-12-06 13:21:33.818981] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.819047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.819068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:27.504 [2024-12-06 13:21:33.819081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:27.504 [2024-12-06 13:21:33.819094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.819149] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:27.504 [2024-12-06 13:21:33.819319] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:27.504 [2024-12-06 13:21:33.819337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:27.504 [2024-12-06 13:21:33.819355] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:27.504 [2024-12-06 13:21:33.819370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:27.504 [2024-12-06 13:21:33.819386] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:27.504 [2024-12-06 13:21:33.819398] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:27.504 [2024-12-06 13:21:33.819413] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:27.504 [2024-12-06 13:21:33.819424] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:27.504 [2024-12-06 13:21:33.819437] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:27.504 [2024-12-06 13:21:33.819452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.819465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:27.504 [2024-12-06 13:21:33.819477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:27:27.504 [2024-12-06 13:21:33.819504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.819602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.504 [2024-12-06 13:21:33.819620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:27.504 [2024-12-06 13:21:33.819632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:27:27.504 [2024-12-06 13:21:33.819647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.504 [2024-12-06 13:21:33.819749] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:27.504 [2024-12-06 13:21:33.819769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:27.504 [2024-12-06 13:21:33.819782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:27.504 [2024-12-06 13:21:33.819795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.504 [2024-12-06 13:21:33.819807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:27.504 [2024-12-06 13:21:33.819819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:27.504 [2024-12-06 13:21:33.819829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:27.504 
[2024-12-06 13:21:33.819862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:27.504 [2024-12-06 13:21:33.819878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:27.504 [2024-12-06 13:21:33.819891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:27.504 [2024-12-06 13:21:33.819901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:27.504 [2024-12-06 13:21:33.819929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:27.504 [2024-12-06 13:21:33.819940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:27.504 [2024-12-06 13:21:33.819953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:27.504 [2024-12-06 13:21:33.819964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:27.504 [2024-12-06 13:21:33.819978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.504 [2024-12-06 13:21:33.819988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:27.504 [2024-12-06 13:21:33.820000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:27.504 [2024-12-06 13:21:33.820011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:27.504 [2024-12-06 13:21:33.820034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.504 [2024-12-06 13:21:33.820056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:27.504 [2024-12-06 13:21:33.820068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.504 [2024-12-06 13:21:33.820090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:27.504 [2024-12-06 13:21:33.820100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.504 [2024-12-06 13:21:33.820122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:27.504 [2024-12-06 13:21:33.820135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.504 [2024-12-06 13:21:33.820159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:27.504 [2024-12-06 13:21:33.820169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:27.504 [2024-12-06 13:21:33.820194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:27.504 [2024-12-06 13:21:33.820208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:27.504 [2024-12-06 13:21:33.820218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:27.504 [2024-12-06 13:21:33.820230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:27.504 [2024-12-06 13:21:33.820240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:27:27.504 [2024-12-06 13:21:33.820253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:27.504 [2024-12-06 13:21:33.820275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:27.504 [2024-12-06 13:21:33.820285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820297] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:27.504 [2024-12-06 13:21:33.820309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:27.504 [2024-12-06 13:21:33.820321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:27.504 [2024-12-06 13:21:33.820333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.504 [2024-12-06 13:21:33.820348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:27.504 [2024-12-06 13:21:33.820359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:27.504 [2024-12-06 13:21:33.820371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:27.504 [2024-12-06 13:21:33.820382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:27.504 [2024-12-06 13:21:33.820393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:27.505 [2024-12-06 13:21:33.820404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:27.505 [2024-12-06 13:21:33.820418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:27.505 [2024-12-06 13:21:33.820432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.505 [2024-12-06 13:21:33.820447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:27.505 [2024-12-06 13:21:33.820459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:27.505 [2024-12-06 13:21:33.820472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:27.505 [2024-12-06 13:21:33.820483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:27.505 [2024-12-06 13:21:33.820496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:27.505 [2024-12-06 13:21:33.820507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:27.505 [2024-12-06 13:21:33.820520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:27.505 [2024-12-06 13:21:33.820531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:27.505 [2024-12-06 13:21:33.820548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:27.505 [2024-12-06 13:21:33.820559] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:27.505 [2024-12-06 13:21:33.820572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:27.505 [2024-12-06 13:21:33.820585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:27.505 [2024-12-06 13:21:33.820598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:27.505 [2024-12-06 13:21:33.820610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:27.505 [2024-12-06 13:21:33.820623] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:27.505 [2024-12-06 13:21:33.820635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.505 [2024-12-06 13:21:33.820652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:27.505 [2024-12-06 13:21:33.820673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:27.505 [2024-12-06 13:21:33.820686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:27.505 [2024-12-06 13:21:33.820697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:27.505 [2024-12-06 13:21:33.820711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.505 [2024-12-06 13:21:33.820723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:27.505 [2024-12-06 13:21:33.820737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:27:27.505 [2024-12-06 13:21:33.820748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.505 [2024-12-06 13:21:33.820798] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:27:27.505 [2024-12-06 13:21:33.820814] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:29.402 [2024-12-06 13:21:35.722130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.722209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:29.402 [2024-12-06 13:21:35.722235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1901.336 ms 00:27:29.402 [2024-12-06 13:21:35.722248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.755157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.755222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:29.402 [2024-12-06 13:21:35.755246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.616 ms 00:27:29.402 [2024-12-06 13:21:35.755258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.755441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.755461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:29.402 [2024-12-06 13:21:35.755493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:29.402 [2024-12-06 13:21:35.755510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.809908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.809975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:29.402 [2024-12-06 13:21:35.809999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.338 ms 00:27:29.402 [2024-12-06 13:21:35.810012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.810084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.810100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:29.402 [2024-12-06 13:21:35.810116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:29.402 [2024-12-06 13:21:35.810130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.810560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.810583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:29.402 [2024-12-06 13:21:35.810599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:27:29.402 [2024-12-06 13:21:35.810611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.810760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.810778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:29.402 [2024-12-06 13:21:35.810794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:27:29.402 [2024-12-06 13:21:35.810805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.827553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.827636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:29.402 [2024-12-06 
13:21:35.827663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.712 ms 00:27:29.402 [2024-12-06 13:21:35.827691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.843282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:27:29.402 [2024-12-06 13:21:35.848378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.848436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:29.402 [2024-12-06 13:21:35.848457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.547 ms 00:27:29.402 [2024-12-06 13:21:35.848471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.905703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.905781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:29.402 [2024-12-06 13:21:35.905804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.175 ms 00:27:29.402 [2024-12-06 13:21:35.905819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.402 [2024-12-06 13:21:35.906052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.402 [2024-12-06 13:21:35.906080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:29.402 [2024-12-06 13:21:35.906095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:27:29.402 [2024-12-06 13:21:35.906130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:35.940144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:35.940248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:29.661 [2024-12-06 13:21:35.940280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.919 ms 00:27:29.661 [2024-12-06 13:21:35.940308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:35.972458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:35.972525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:29.661 [2024-12-06 13:21:35.972547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.058 ms 00:27:29.661 [2024-12-06 13:21:35.972561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:35.973323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:35.973363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:29.661 [2024-12-06 13:21:35.973380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:27:29.661 [2024-12-06 13:21:35.973394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:36.059531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:36.059620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:29.661 [2024-12-06 13:21:36.059644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.062 ms 00:27:29.661 [2024-12-06 13:21:36.059659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 
13:21:36.093818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:36.093908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:29.661 [2024-12-06 13:21:36.093933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.028 ms 00:27:29.661 [2024-12-06 13:21:36.093948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:36.127984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:36.128053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:29.661 [2024-12-06 13:21:36.128074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.978 ms 00:27:29.661 [2024-12-06 13:21:36.128088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:36.161253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:36.161329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:29.661 [2024-12-06 13:21:36.161350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.104 ms 00:27:29.661 [2024-12-06 13:21:36.161365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:36.161427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:36.161453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:29.661 [2024-12-06 13:21:36.161467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:29.661 [2024-12-06 13:21:36.161481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:36.161648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.661 [2024-12-06 13:21:36.161683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:29.661 [2024-12-06 13:21:36.161704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:29.661 [2024-12-06 13:21:36.161727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.661 [2024-12-06 13:21:36.163105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2356.112 ms, result 0 00:27:29.661 { 00:27:29.661 "name": "ftl0", 00:27:29.661 "uuid": "0daad8ef-fbe6-4877-a563-7da52651bd38" 00:27:29.661 } 00:27:29.661 13:21:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:27:29.661 13:21:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:27:29.920 13:21:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:27:30.178 13:21:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:27:30.178 [2024-12-06 13:21:36.583308] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:30.178 I/O size of 69632 is greater than zero copy threshold (65536). 00:27:30.178 Zero copy mechanism will not be used. 00:27:30.178 Running I/O for 4 seconds... 
00:27:32.113 2241.00 IOPS, 148.82 MiB/s [2024-12-06T13:21:40.017Z] 2132.50 IOPS, 141.61 MiB/s [2024-12-06T13:21:40.953Z] 2066.67 IOPS, 137.24 MiB/s [2024-12-06T13:21:40.953Z] 2091.00 IOPS, 138.86 MiB/s 00:27:34.425 Latency(us) 00:27:34.425 [2024-12-06T13:21:40.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.425 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:27:34.425 ftl0 : 4.00 2090.11 138.80 0.00 0.00 500.97 220.63 4706.68 00:27:34.425 [2024-12-06T13:21:40.953Z] =================================================================================================================== 00:27:34.425 [2024-12-06T13:21:40.953Z] Total : 2090.11 138.80 0.00 0.00 500.97 220.63 4706.68 00:27:34.425 [2024-12-06 13:21:40.595287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:34.425 { 00:27:34.425 "results": [ 00:27:34.425 { 00:27:34.425 "job": "ftl0", 00:27:34.425 "core_mask": "0x1", 00:27:34.425 "workload": "randwrite", 00:27:34.425 "status": "finished", 00:27:34.425 "queue_depth": 1, 00:27:34.425 "io_size": 69632, 00:27:34.425 "runtime": 4.002191, 00:27:34.425 "iops": 2090.1051449068773, 00:27:34.425 "mibps": 138.79604477897232, 00:27:34.425 "io_failed": 0, 00:27:34.425 "io_timeout": 0, 00:27:34.425 "avg_latency_us": 500.9711477476498, 00:27:34.425 "min_latency_us": 220.62545454545455, 00:27:34.425 "max_latency_us": 4706.676363636364 00:27:34.425 } 00:27:34.425 ], 00:27:34.425 "core_count": 1 00:27:34.425 } 00:27:34.425 13:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:27:34.425 [2024-12-06 13:21:40.754325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:34.425 Running I/O for 4 seconds... 
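The MiB/s column in these Latency tables follows directly from IOPS and the -o I/O size passed to perform_tests: MiB/s = IOPS x io_size / 2^20. A minimal awk sketch of that arithmetic (illustrative only, not part of the test flow), checked against the 69632-byte run above:

    # 2090.11 IOPS at 69632 bytes per I/O -> ~138.80 MiB/s, matching the table above
    awk 'BEGIN { iops = 2090.11; io = 69632; printf "%.2f MiB/s\n", iops * io / 1048576 }'

The same formula reproduces the 4096-byte runs below (e.g. 6570.52 IOPS x 4096 / 2^20 = 25.67 MiB/s).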
00:27:36.296 7322.00 IOPS, 28.60 MiB/s [2024-12-06T13:21:44.203Z] 6818.00 IOPS, 26.63 MiB/s [2024-12-06T13:21:44.769Z] 6568.33 IOPS, 25.66 MiB/s [2024-12-06T13:21:45.028Z] 6578.50 IOPS, 25.70 MiB/s 00:27:38.500 Latency(us) 00:27:38.500 [2024-12-06T13:21:45.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.500 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:27:38.500 ftl0 : 4.02 6570.52 25.67 0.00 0.00 19420.70 428.22 39321.60 00:27:38.500 [2024-12-06T13:21:45.028Z] =================================================================================================================== 00:27:38.500 [2024-12-06T13:21:45.028Z] Total : 6570.52 25.67 0.00 0.00 19420.70 0.00 39321.60 00:27:38.500 [2024-12-06 13:21:44.790176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:38.500 { 00:27:38.500 "results": [ 00:27:38.500 { 00:27:38.500 "job": "ftl0", 00:27:38.500 "core_mask": "0x1", 00:27:38.500 "workload": "randwrite", 00:27:38.500 "status": "finished", 00:27:38.500 "queue_depth": 128, 00:27:38.500 "io_size": 4096, 00:27:38.500 "runtime": 4.024339, 00:27:38.500 "iops": 6570.520028258057, 00:27:38.500 "mibps": 25.666093860383036, 00:27:38.500 "io_failed": 0, 00:27:38.500 "io_timeout": 0, 00:27:38.500 "avg_latency_us": 19420.7025198204, 00:27:38.500 "min_latency_us": 428.2181818181818, 00:27:38.500 "max_latency_us": 39321.6 00:27:38.500 } 00:27:38.500 ], 00:27:38.500 "core_count": 1 00:27:38.500 } 00:27:38.500 13:21:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:27:38.500 [2024-12-06 13:21:44.975687] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:38.500 Running I/O for 4 seconds... 
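Each perform_tests call also emits a machine-readable summary — the { "results": [ ... ] } blocks above. A sketch of pulling the headline fields back out with jq, assuming one of those blocks has been saved to results.json (a hypothetical filename; the harness only prints them inline):

    # job name, workload, queue depth, throughput, and average latency per result entry
    jq -r '.results[] | "\(.job): \(.workload) qd=\(.queue_depth) \(.iops) IOPS \(.mibps) MiB/s avg \(.avg_latency_us) us"' results.json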
00:27:40.806 5220.00 IOPS, 20.39 MiB/s [2024-12-06T13:21:48.268Z] 5293.50 IOPS, 20.68 MiB/s [2024-12-06T13:21:49.203Z] 5453.00 IOPS, 21.30 MiB/s [2024-12-06T13:21:49.203Z] 5506.75 IOPS, 21.51 MiB/s 00:27:42.675 Latency(us) 00:27:42.675 [2024-12-06T13:21:49.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.675 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:42.675 Verification LBA range: start 0x0 length 0x1400000 00:27:42.675 ftl0 : 4.01 5519.61 21.56 0.00 0.00 23109.09 396.57 32410.53 00:27:42.675 [2024-12-06T13:21:49.203Z] =================================================================================================================== 00:27:42.675 [2024-12-06T13:21:49.203Z] Total : 5519.61 21.56 0.00 0.00 23109.09 0.00 32410.53 00:27:42.675 [2024-12-06 13:21:49.007742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:42.675 { 00:27:42.675 "results": [ 00:27:42.675 { 00:27:42.675 "job": "ftl0", 00:27:42.675 "core_mask": "0x1", 00:27:42.675 "workload": "verify", 00:27:42.675 "status": "finished", 00:27:42.675 "verify_range": { 00:27:42.675 "start": 0, 00:27:42.675 "length": 20971520 00:27:42.675 }, 00:27:42.675 "queue_depth": 128, 00:27:42.675 "io_size": 4096, 00:27:42.675 "runtime": 4.01369, 00:27:42.675 "iops": 5519.609137726133, 00:27:42.675 "mibps": 21.560973194242706, 00:27:42.675 "io_failed": 0, 00:27:42.675 "io_timeout": 0, 00:27:42.675 "avg_latency_us": 23109.091321411277, 00:27:42.676 "min_latency_us": 396.5672727272727, 00:27:42.676 "max_latency_us": 32410.53090909091 00:27:42.676 } 00:27:42.676 ], 00:27:42.676 "core_count": 1 00:27:42.676 } 00:27:42.676 13:21:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:27:42.934 [2024-12-06 13:21:49.314546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.934 [2024-12-06 13:21:49.314619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:42.934 [2024-12-06 13:21:49.314641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:42.934 [2024-12-06 13:21:49.314657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.934 [2024-12-06 13:21:49.314691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:42.934 [2024-12-06 13:21:49.318125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.934 [2024-12-06 13:21:49.318170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:42.934 [2024-12-06 13:21:49.318190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.403 ms 00:27:42.934 [2024-12-06 13:21:49.318202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.934 [2024-12-06 13:21:49.319575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.934 [2024-12-06 13:21:49.319621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:42.934 [2024-12-06 13:21:49.319645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:27:42.934 [2024-12-06 13:21:49.319657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.202 [2024-12-06 13:21:49.534411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.202 [2024-12-06 13:21:49.534536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:27:43.202 [2024-12-06 13:21:49.534586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.705 ms 00:27:43.202 [2024-12-06 13:21:49.534613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.542095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.203 [2024-12-06 13:21:49.542144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:43.203 [2024-12-06 13:21:49.542166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.360 ms 00:27:43.203 [2024-12-06 13:21:49.542183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.573895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.203 [2024-12-06 13:21:49.573961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:43.203 [2024-12-06 13:21:49.573985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.578 ms 00:27:43.203 [2024-12-06 13:21:49.573998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.592640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.203 [2024-12-06 13:21:49.592694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:43.203 [2024-12-06 13:21:49.592717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.524 ms 00:27:43.203 [2024-12-06 13:21:49.592730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.592988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.203 [2024-12-06 13:21:49.593023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:43.203 [2024-12-06 13:21:49.593044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:27:43.203 [2024-12-06 13:21:49.593056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.624559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.203 [2024-12-06 13:21:49.624608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:43.203 [2024-12-06 13:21:49.624629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.474 ms 00:27:43.203 [2024-12-06 13:21:49.624641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.656971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.203 [2024-12-06 13:21:49.657030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:43.203 [2024-12-06 13:21:49.657053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.233 ms 00:27:43.203 [2024-12-06 13:21:49.657066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.688297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.203 [2024-12-06 13:21:49.688356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:43.203 [2024-12-06 13:21:49.688380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.116 ms 00:27:43.203 [2024-12-06 13:21:49.688394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.719887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.203 [2024-12-06 13:21:49.719957] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:43.203 [2024-12-06 13:21:49.719984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.353 ms 00:27:43.203 [2024-12-06 13:21:49.719997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.203 [2024-12-06 13:21:49.720056] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:43.203 [2024-12-06 13:21:49.720082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:27:43.203 [2024-12-06 13:21:49.720382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.720991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:43.203 [2024-12-06 13:21:49.721357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:43.204 [2024-12-06 13:21:49.721369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:43.204 [2024-12-06 13:21:49.721382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:43.204 [2024-12-06 13:21:49.721398] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:43.204 [2024-12-06 13:21:49.721414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:43.204 [2024-12-06 13:21:49.721426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:43.204 [2024-12-06 13:21:49.721440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:43.204 [2024-12-06 13:21:49.721462] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:43.204 [2024-12-06 13:21:49.721476] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0daad8ef-fbe6-4877-a563-7da52651bd38 00:27:43.204 [2024-12-06 13:21:49.721491] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:43.204 [2024-12-06 13:21:49.721504] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:43.204 [2024-12-06 13:21:49.721514] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:43.204 [2024-12-06 13:21:49.721528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:43.204 [2024-12-06 13:21:49.721539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:43.204 [2024-12-06 13:21:49.721552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:43.204 [2024-12-06 13:21:49.721563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:43.204 [2024-12-06 13:21:49.721576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:43.204 [2024-12-06 13:21:49.721587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:43.204 [2024-12-06 13:21:49.721601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.204 [2024-12-06 13:21:49.721612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:43.204 [2024-12-06 13:21:49.721627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:27:43.204 [2024-12-06 13:21:49.721639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.462 [2024-12-06 13:21:49.739192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.462 [2024-12-06 13:21:49.739257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:43.462 [2024-12-06 13:21:49.739280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.469 ms 00:27:43.462 [2024-12-06 13:21:49.739293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.462 [2024-12-06 13:21:49.739980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.462 [2024-12-06 13:21:49.740041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:43.462 [2024-12-06 13:21:49.740079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:27:43.462 [2024-12-06 13:21:49.740106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.462 [2024-12-06 13:21:49.793456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.462 [2024-12-06 13:21:49.793532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:43.462 [2024-12-06 13:21:49.793558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.462 [2024-12-06 13:21:49.793571] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:43.462 [2024-12-06 13:21:49.793666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.462 [2024-12-06 13:21:49.793684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:43.462 [2024-12-06 13:21:49.793698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.462 [2024-12-06 13:21:49.793710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.462 [2024-12-06 13:21:49.793907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.462 [2024-12-06 13:21:49.793941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:43.462 [2024-12-06 13:21:49.793959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.462 [2024-12-06 13:21:49.793971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.462 [2024-12-06 13:21:49.793999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.462 [2024-12-06 13:21:49.794014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:43.462 [2024-12-06 13:21:49.794028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.462 [2024-12-06 13:21:49.794039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.462 [2024-12-06 13:21:49.901424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.462 [2024-12-06 13:21:49.901500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:43.462 [2024-12-06 13:21:49.901537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.462 [2024-12-06 13:21:49.901564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.719 [2024-12-06 13:21:49.989913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.719 [2024-12-06 13:21:49.989997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:43.719 [2024-12-06 13:21:49.990024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.719 [2024-12-06 13:21:49.990037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.719 [2024-12-06 13:21:49.990207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.719 [2024-12-06 13:21:49.990227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:43.719 [2024-12-06 13:21:49.990243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.719 [2024-12-06 13:21:49.990255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.719 [2024-12-06 13:21:49.990325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.719 [2024-12-06 13:21:49.990343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:43.719 [2024-12-06 13:21:49.990358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.719 [2024-12-06 13:21:49.990369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.719 [2024-12-06 13:21:49.990502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.719 [2024-12-06 13:21:49.990535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:43.719 [2024-12-06 13:21:49.990556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:27:43.719 [2024-12-06 13:21:49.990568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.719 [2024-12-06 13:21:49.990626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.719 [2024-12-06 13:21:49.990644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:43.719 [2024-12-06 13:21:49.990659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.719 [2024-12-06 13:21:49.990670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.719 [2024-12-06 13:21:49.990721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.719 [2024-12-06 13:21:49.990746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:43.719 [2024-12-06 13:21:49.990762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.719 [2024-12-06 13:21:49.990785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.719 [2024-12-06 13:21:49.990862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:43.719 [2024-12-06 13:21:49.990882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:43.719 [2024-12-06 13:21:49.990898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:43.719 [2024-12-06 13:21:49.990909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.719 [2024-12-06 13:21:49.991072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 676.492 ms, result 0 00:27:43.719 true 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78239 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78239 ']' 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78239 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78239 00:27:43.719 killing process with pid 78239 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78239' 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78239 00:27:43.719 Received shutdown signal, test time was about 4.000000 seconds 00:27:43.719 00:27:43.719 Latency(us) 00:27:43.719 [2024-12-06T13:21:50.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.719 [2024-12-06T13:21:50.247Z] =================================================================================================================== 00:27:43.719 [2024-12-06T13:21:50.247Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:43.719 13:21:50 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78239 00:27:47.904 Remove shared memory files 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:47.904 13:21:53 
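The run above reduces to a short RPC lifecycle: creating the FTL bdev kicks off the 'FTL startup' management process traced earlier, bdevperf drives I/O against it, and bdev_ftl_delete closes it out with the 'FTL shutdown' process and its rollback steps. A minimal sketch of that lifecycle with scripts/rpc.py — the base bdev name is a placeholder (only the nvc0n1p0 cache device is named in this log), and the bdev_ftl_create flags are an assumption to verify against the rpc.py in use:

    # create: runs the 'FTL startup' steps traced above (layout setup, NV cache scrub, L2P init, ...)
    rpc.py bdev_ftl_create -b ftl0 -d base_bdev -c nvc0n1p0
    # sanity check, as in bdevperf.sh@28 above
    rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
    # delete: runs the 'FTL shutdown' steps (persist metadata, dump band stats, rollback actions)
    rpc.py bdev_ftl_delete -b ftl0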
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:27:47.904 00:27:47.904 real 0m25.360s 00:27:47.904 user 0m29.640s 00:27:47.904 sys 0m1.104s 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:47.904 13:21:53 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:47.904 ************************************ 00:27:47.904 END TEST ftl_bdevperf 00:27:47.904 ************************************ 00:27:47.904 13:21:53 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:47.904 13:21:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:47.904 13:21:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:47.904 13:21:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:47.904 ************************************ 00:27:47.904 START TEST ftl_trim 00:27:47.904 ************************************ 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:47.904 * Looking for test storage... 00:27:47.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:47.904 13:21:53 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.904 --rc genhtml_branch_coverage=1 00:27:47.904 --rc genhtml_function_coverage=1 00:27:47.904 --rc genhtml_legend=1 00:27:47.904 --rc geninfo_all_blocks=1 00:27:47.904 --rc geninfo_unexecuted_blocks=1 00:27:47.904 00:27:47.904 ' 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.904 --rc genhtml_branch_coverage=1 00:27:47.904 --rc genhtml_function_coverage=1 00:27:47.904 --rc genhtml_legend=1 00:27:47.904 --rc geninfo_all_blocks=1 00:27:47.904 --rc geninfo_unexecuted_blocks=1 00:27:47.904 00:27:47.904 ' 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.904 --rc genhtml_branch_coverage=1 00:27:47.904 --rc genhtml_function_coverage=1 00:27:47.904 --rc genhtml_legend=1 00:27:47.904 --rc geninfo_all_blocks=1 00:27:47.904 --rc geninfo_unexecuted_blocks=1 00:27:47.904 00:27:47.904 ' 00:27:47.904 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.904 --rc genhtml_branch_coverage=1 00:27:47.904 --rc genhtml_function_coverage=1 00:27:47.904 --rc genhtml_legend=1 00:27:47.904 --rc geninfo_all_blocks=1 00:27:47.904 --rc geninfo_unexecuted_blocks=1 00:27:47.904 00:27:47.904 ' 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
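The xtrace above is scripts/common.sh deciding which lcov flags to use: "lt 1.15 2" splits both version strings on IFS=.-: and compares them component by component, so lcov 1.x gets the legacy --rc lcov_* options exported a few lines later. A minimal standalone sketch of that comparison (ver_lt is a hypothetical stand-in, not the exact SPDK helper):

  # Sketch of the component-wise version test traced above; ver_lt is a
  # hypothetical stand-in for the lt/cmp_versions helpers in scripts/common.sh.
  ver_lt() {
      local IFS=.-:                        # split versions on '.', '-', ':'
      local -a a b
      local i x y
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}        # missing components compare as 0
          ((x < y)) && return 0            # first smaller component decides
          ((x > y)) && return 1
      done
      return 1                             # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x: use legacy --rc options"
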
00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:47.904 13:21:53 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:47.905 13:21:53 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78588 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78588 00:27:47.905 13:21:53 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:47.905 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78588 ']' 00:27:47.905 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.905 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.905 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.905 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.905 13:21:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:47.905 [2024-12-06 13:21:54.079001] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:27:47.905 [2024-12-06 13:21:54.079985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78588 ] 00:27:47.905 [2024-12-06 13:21:54.314284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:48.164 [2024-12-06 13:21:54.450687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.164 [2024-12-06 13:21:54.450783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.164 [2024-12-06 13:21:54.450785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.096 13:21:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.096 13:21:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:49.096 13:21:55 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:49.096 13:21:55 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:27:49.096 13:21:55 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:49.096 13:21:55 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:27:49.096 13:21:55 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:27:49.096 13:21:55 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:49.354 13:21:55 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:49.354 13:21:55 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:27:49.354 13:21:55 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:49.354 13:21:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:49.354 13:21:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:49.354 13:21:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:49.354 13:21:55 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:49.354 13:21:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:49.611 13:21:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:49.611 { 00:27:49.611 "name": "nvme0n1", 00:27:49.611 "aliases": [ 
00:27:49.611 "ef7c862d-496a-4162-b026-3b3df8290c8c" 00:27:49.611 ], 00:27:49.611 "product_name": "NVMe disk", 00:27:49.611 "block_size": 4096, 00:27:49.611 "num_blocks": 1310720, 00:27:49.611 "uuid": "ef7c862d-496a-4162-b026-3b3df8290c8c", 00:27:49.611 "numa_id": -1, 00:27:49.611 "assigned_rate_limits": { 00:27:49.611 "rw_ios_per_sec": 0, 00:27:49.611 "rw_mbytes_per_sec": 0, 00:27:49.611 "r_mbytes_per_sec": 0, 00:27:49.611 "w_mbytes_per_sec": 0 00:27:49.611 }, 00:27:49.611 "claimed": true, 00:27:49.611 "claim_type": "read_many_write_one", 00:27:49.611 "zoned": false, 00:27:49.611 "supported_io_types": { 00:27:49.611 "read": true, 00:27:49.611 "write": true, 00:27:49.611 "unmap": true, 00:27:49.611 "flush": true, 00:27:49.611 "reset": true, 00:27:49.611 "nvme_admin": true, 00:27:49.611 "nvme_io": true, 00:27:49.611 "nvme_io_md": false, 00:27:49.611 "write_zeroes": true, 00:27:49.611 "zcopy": false, 00:27:49.611 "get_zone_info": false, 00:27:49.611 "zone_management": false, 00:27:49.611 "zone_append": false, 00:27:49.611 "compare": true, 00:27:49.611 "compare_and_write": false, 00:27:49.611 "abort": true, 00:27:49.611 "seek_hole": false, 00:27:49.611 "seek_data": false, 00:27:49.611 "copy": true, 00:27:49.611 "nvme_iov_md": false 00:27:49.611 }, 00:27:49.611 "driver_specific": { 00:27:49.611 "nvme": [ 00:27:49.611 { 00:27:49.611 "pci_address": "0000:00:11.0", 00:27:49.611 "trid": { 00:27:49.611 "trtype": "PCIe", 00:27:49.611 "traddr": "0000:00:11.0" 00:27:49.612 }, 00:27:49.612 "ctrlr_data": { 00:27:49.612 "cntlid": 0, 00:27:49.612 "vendor_id": "0x1b36", 00:27:49.612 "model_number": "QEMU NVMe Ctrl", 00:27:49.612 "serial_number": "12341", 00:27:49.612 "firmware_revision": "8.0.0", 00:27:49.612 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:49.612 "oacs": { 00:27:49.612 "security": 0, 00:27:49.612 "format": 1, 00:27:49.612 "firmware": 0, 00:27:49.612 "ns_manage": 1 00:27:49.612 }, 00:27:49.612 "multi_ctrlr": false, 00:27:49.612 "ana_reporting": false 00:27:49.612 }, 00:27:49.612 "vs": { 00:27:49.612 "nvme_version": "1.4" 00:27:49.612 }, 00:27:49.612 "ns_data": { 00:27:49.612 "id": 1, 00:27:49.612 "can_share": false 00:27:49.612 } 00:27:49.612 } 00:27:49.612 ], 00:27:49.612 "mp_policy": "active_passive" 00:27:49.612 } 00:27:49.612 } 00:27:49.612 ]' 00:27:49.612 13:21:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:49.870 13:21:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:49.870 13:21:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:49.870 13:21:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:49.870 13:21:56 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:49.870 13:21:56 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:27:49.870 13:21:56 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:27:49.870 13:21:56 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:49.870 13:21:56 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:27:49.870 13:21:56 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:49.870 13:21:56 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:50.436 13:21:56 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3efd8643-8425-42f5-ae1f-591a76800e7a 00:27:50.436 13:21:56 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:27:50.436 13:21:56 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 3efd8643-8425-42f5-ae1f-591a76800e7a 00:27:50.694 13:21:57 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:51.259 13:21:57 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d25c70b7-3691-4cdd-b10f-ea16a1423d35 00:27:51.259 13:21:57 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d25c70b7-3691-4cdd-b10f-ea16a1423d35 00:27:51.517 13:21:57 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:51.517 13:21:57 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:51.517 13:21:57 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:27:51.517 13:21:57 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:51.517 13:21:57 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:51.517 13:21:57 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:27:51.517 13:21:57 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:51.517 13:21:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:51.517 13:21:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:51.517 13:21:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:51.517 13:21:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:51.517 13:21:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:52.083 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:52.083 { 00:27:52.083 "name": "b1f5cdc1-afb8-4e58-90d4-b89052df1885", 00:27:52.083 "aliases": [ 00:27:52.083 "lvs/nvme0n1p0" 00:27:52.083 ], 00:27:52.083 "product_name": "Logical Volume", 00:27:52.083 "block_size": 4096, 00:27:52.083 "num_blocks": 26476544, 00:27:52.083 "uuid": "b1f5cdc1-afb8-4e58-90d4-b89052df1885", 00:27:52.083 "assigned_rate_limits": { 00:27:52.083 "rw_ios_per_sec": 0, 00:27:52.083 "rw_mbytes_per_sec": 0, 00:27:52.083 "r_mbytes_per_sec": 0, 00:27:52.083 "w_mbytes_per_sec": 0 00:27:52.083 }, 00:27:52.083 "claimed": false, 00:27:52.083 "zoned": false, 00:27:52.083 "supported_io_types": { 00:27:52.083 "read": true, 00:27:52.083 "write": true, 00:27:52.083 "unmap": true, 00:27:52.083 "flush": false, 00:27:52.083 "reset": true, 00:27:52.083 "nvme_admin": false, 00:27:52.083 "nvme_io": false, 00:27:52.083 "nvme_io_md": false, 00:27:52.083 "write_zeroes": true, 00:27:52.083 "zcopy": false, 00:27:52.083 "get_zone_info": false, 00:27:52.083 "zone_management": false, 00:27:52.083 "zone_append": false, 00:27:52.083 "compare": false, 00:27:52.083 "compare_and_write": false, 00:27:52.083 "abort": false, 00:27:52.083 "seek_hole": true, 00:27:52.083 "seek_data": true, 00:27:52.083 "copy": false, 00:27:52.083 "nvme_iov_md": false 00:27:52.083 }, 00:27:52.083 "driver_specific": { 00:27:52.083 "lvol": { 00:27:52.083 "lvol_store_uuid": "d25c70b7-3691-4cdd-b10f-ea16a1423d35", 00:27:52.083 "base_bdev": "nvme0n1", 00:27:52.083 "thin_provision": true, 00:27:52.083 "num_allocated_clusters": 0, 00:27:52.083 "snapshot": false, 00:27:52.083 "clone": false, 00:27:52.083 "esnap_clone": false 00:27:52.083 } 00:27:52.083 } 00:27:52.083 } 00:27:52.083 ]' 00:27:52.083 13:21:58 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:52.083 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:52.083 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:52.083 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:52.083 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:52.083 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:52.083 13:21:58 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:27:52.083 13:21:58 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:27:52.083 13:21:58 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:52.650 13:21:58 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:52.650 13:21:58 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:52.650 13:21:58 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:52.650 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:52.650 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:52.650 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:52.650 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:52.650 13:21:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:52.908 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:52.908 { 00:27:52.908 "name": "b1f5cdc1-afb8-4e58-90d4-b89052df1885", 00:27:52.908 "aliases": [ 00:27:52.908 "lvs/nvme0n1p0" 00:27:52.908 ], 00:27:52.908 "product_name": "Logical Volume", 00:27:52.908 "block_size": 4096, 00:27:52.908 "num_blocks": 26476544, 00:27:52.908 "uuid": "b1f5cdc1-afb8-4e58-90d4-b89052df1885", 00:27:52.908 "assigned_rate_limits": { 00:27:52.908 "rw_ios_per_sec": 0, 00:27:52.908 "rw_mbytes_per_sec": 0, 00:27:52.908 "r_mbytes_per_sec": 0, 00:27:52.908 "w_mbytes_per_sec": 0 00:27:52.908 }, 00:27:52.908 "claimed": false, 00:27:52.908 "zoned": false, 00:27:52.908 "supported_io_types": { 00:27:52.908 "read": true, 00:27:52.908 "write": true, 00:27:52.908 "unmap": true, 00:27:52.908 "flush": false, 00:27:52.908 "reset": true, 00:27:52.908 "nvme_admin": false, 00:27:52.908 "nvme_io": false, 00:27:52.908 "nvme_io_md": false, 00:27:52.908 "write_zeroes": true, 00:27:52.908 "zcopy": false, 00:27:52.908 "get_zone_info": false, 00:27:52.908 "zone_management": false, 00:27:52.908 "zone_append": false, 00:27:52.908 "compare": false, 00:27:52.908 "compare_and_write": false, 00:27:52.908 "abort": false, 00:27:52.908 "seek_hole": true, 00:27:52.908 "seek_data": true, 00:27:52.908 "copy": false, 00:27:52.908 "nvme_iov_md": false 00:27:52.908 }, 00:27:52.908 "driver_specific": { 00:27:52.908 "lvol": { 00:27:52.908 "lvol_store_uuid": "d25c70b7-3691-4cdd-b10f-ea16a1423d35", 00:27:52.908 "base_bdev": "nvme0n1", 00:27:52.908 "thin_provision": true, 00:27:52.908 "num_allocated_clusters": 0, 00:27:52.908 "snapshot": false, 00:27:52.908 "clone": false, 00:27:52.908 "esnap_clone": false 00:27:52.908 } 00:27:52.908 } 00:27:52.908 } 00:27:52.908 ]' 00:27:52.908 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:52.908 13:21:59 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:27:52.908 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:53.166 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:53.166 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:53.166 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:53.166 13:21:59 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:27:53.166 13:21:59 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:53.424 13:21:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:27:53.424 13:21:59 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:27:53.424 13:21:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:53.424 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:53.424 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:53.424 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:53.424 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:53.424 13:21:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f5cdc1-afb8-4e58-90d4-b89052df1885 00:27:53.991 13:22:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:53.991 { 00:27:53.991 "name": "b1f5cdc1-afb8-4e58-90d4-b89052df1885", 00:27:53.991 "aliases": [ 00:27:53.991 "lvs/nvme0n1p0" 00:27:53.991 ], 00:27:53.991 "product_name": "Logical Volume", 00:27:53.991 "block_size": 4096, 00:27:53.991 "num_blocks": 26476544, 00:27:53.991 "uuid": "b1f5cdc1-afb8-4e58-90d4-b89052df1885", 00:27:53.991 "assigned_rate_limits": { 00:27:53.991 "rw_ios_per_sec": 0, 00:27:53.991 "rw_mbytes_per_sec": 0, 00:27:53.991 "r_mbytes_per_sec": 0, 00:27:53.991 "w_mbytes_per_sec": 0 00:27:53.991 }, 00:27:53.991 "claimed": false, 00:27:53.991 "zoned": false, 00:27:53.991 "supported_io_types": { 00:27:53.991 "read": true, 00:27:53.991 "write": true, 00:27:53.991 "unmap": true, 00:27:53.991 "flush": false, 00:27:53.991 "reset": true, 00:27:53.991 "nvme_admin": false, 00:27:53.991 "nvme_io": false, 00:27:53.991 "nvme_io_md": false, 00:27:53.991 "write_zeroes": true, 00:27:53.991 "zcopy": false, 00:27:53.991 "get_zone_info": false, 00:27:53.991 "zone_management": false, 00:27:53.991 "zone_append": false, 00:27:53.991 "compare": false, 00:27:53.991 "compare_and_write": false, 00:27:53.991 "abort": false, 00:27:53.991 "seek_hole": true, 00:27:53.991 "seek_data": true, 00:27:53.991 "copy": false, 00:27:53.991 "nvme_iov_md": false 00:27:53.991 }, 00:27:53.991 "driver_specific": { 00:27:53.991 "lvol": { 00:27:53.991 "lvol_store_uuid": "d25c70b7-3691-4cdd-b10f-ea16a1423d35", 00:27:53.991 "base_bdev": "nvme0n1", 00:27:53.991 "thin_provision": true, 00:27:53.991 "num_allocated_clusters": 0, 00:27:53.991 "snapshot": false, 00:27:53.991 "clone": false, 00:27:53.991 "esnap_clone": false 00:27:53.991 } 00:27:53.991 } 00:27:53.991 } 00:27:53.991 ]' 00:27:53.991 13:22:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:53.991 13:22:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:53.991 13:22:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:53.991 13:22:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:27:53.991 13:22:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:53.991 13:22:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:53.991 13:22:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:27:53.992 13:22:00 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b1f5cdc1-afb8-4e58-90d4-b89052df1885 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:27:54.251 [2024-12-06 13:22:00.657888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.658796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:54.251 [2024-12-06 13:22:00.658890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:54.251 [2024-12-06 13:22:00.658918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.665021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.665331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:54.251 [2024-12-06 13:22:00.665565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.026 ms 00:27:54.251 [2024-12-06 13:22:00.665879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.666385] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:54.251 [2024-12-06 13:22:00.668225] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:54.251 [2024-12-06 13:22:00.668598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.668896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:54.251 [2024-12-06 13:22:00.669194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:27:54.251 [2024-12-06 13:22:00.669468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.670141] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c9c5936a-1bb5-432f-b1c3-6cf254b3be43 00:27:54.251 [2024-12-06 13:22:00.671869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.672170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:54.251 [2024-12-06 13:22:00.672472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:54.251 [2024-12-06 13:22:00.672628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.678336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.678575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:54.251 [2024-12-06 13:22:00.678716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.316 ms 00:27:54.251 [2024-12-06 13:22:00.679097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.679584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.679875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:54.251 [2024-12-06 13:22:00.680172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.179 ms 00:27:54.251 [2024-12-06 13:22:00.680323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.680642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.680788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:54.251 [2024-12-06 13:22:00.680948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:54.251 [2024-12-06 13:22:00.681226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.681417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:54.251 [2024-12-06 13:22:00.688308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.688471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:54.251 [2024-12-06 13:22:00.688615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.903 ms 00:27:54.251 [2024-12-06 13:22:00.688903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.689174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.689475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:54.251 [2024-12-06 13:22:00.689660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:54.251 [2024-12-06 13:22:00.689792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.690018] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:54.251 [2024-12-06 13:22:00.690590] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:54.251 [2024-12-06 13:22:00.690914] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:54.251 [2024-12-06 13:22:00.691064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:54.251 [2024-12-06 13:22:00.691333] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:54.251 [2024-12-06 13:22:00.691481] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:54.251 [2024-12-06 13:22:00.691644] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:54.251 [2024-12-06 13:22:00.691894] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:54.251 [2024-12-06 13:22:00.692047] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:54.251 [2024-12-06 13:22:00.692167] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:54.251 [2024-12-06 13:22:00.692410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 [2024-12-06 13:22:00.692544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:54.251 [2024-12-06 13:22:00.692670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:27:54.251 [2024-12-06 13:22:00.692929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.693342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.251 
[2024-12-06 13:22:00.693591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:54.251 [2024-12-06 13:22:00.693785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:27:54.251 [2024-12-06 13:22:00.693985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.251 [2024-12-06 13:22:00.694478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:54.251 [2024-12-06 13:22:00.694632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:54.251 [2024-12-06 13:22:00.694740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:54.251 [2024-12-06 13:22:00.695008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.251 [2024-12-06 13:22:00.695176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:54.251 [2024-12-06 13:22:00.695295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:54.251 [2024-12-06 13:22:00.695413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:54.251 [2024-12-06 13:22:00.695668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:54.251 [2024-12-06 13:22:00.695816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:54.251 [2024-12-06 13:22:00.695975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:54.251 [2024-12-06 13:22:00.696234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:54.251 [2024-12-06 13:22:00.696378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:54.251 [2024-12-06 13:22:00.696505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:54.251 [2024-12-06 13:22:00.696743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:54.251 [2024-12-06 13:22:00.696919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:54.251 [2024-12-06 13:22:00.697036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.251 [2024-12-06 13:22:00.697152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:54.251 [2024-12-06 13:22:00.697184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:54.251 [2024-12-06 13:22:00.697208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.251 [2024-12-06 13:22:00.697228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:54.251 [2024-12-06 13:22:00.697250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:54.251 [2024-12-06 13:22:00.697269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.251 [2024-12-06 13:22:00.697291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:54.251 [2024-12-06 13:22:00.697309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:54.251 [2024-12-06 13:22:00.697329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.251 [2024-12-06 13:22:00.697347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:54.251 [2024-12-06 13:22:00.697369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:54.251 [2024-12-06 13:22:00.697387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.252 [2024-12-06 13:22:00.697409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:27:54.252 [2024-12-06 13:22:00.697428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:54.252 [2024-12-06 13:22:00.697449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.252 [2024-12-06 13:22:00.697469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:54.252 [2024-12-06 13:22:00.697497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:54.252 [2024-12-06 13:22:00.697517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:54.252 [2024-12-06 13:22:00.697539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:54.252 [2024-12-06 13:22:00.697560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:54.252 [2024-12-06 13:22:00.697585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:54.252 [2024-12-06 13:22:00.697607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:54.252 [2024-12-06 13:22:00.697631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:54.252 [2024-12-06 13:22:00.697650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.252 [2024-12-06 13:22:00.697673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:54.252 [2024-12-06 13:22:00.697692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:54.252 [2024-12-06 13:22:00.697714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.252 [2024-12-06 13:22:00.697731] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:54.252 [2024-12-06 13:22:00.697755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:54.252 [2024-12-06 13:22:00.697775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:54.252 [2024-12-06 13:22:00.697796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.252 [2024-12-06 13:22:00.697816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:54.252 [2024-12-06 13:22:00.699793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:54.252 [2024-12-06 13:22:00.699966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:54.252 [2024-12-06 13:22:00.700095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:54.252 [2024-12-06 13:22:00.700126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:54.252 [2024-12-06 13:22:00.700150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:54.252 [2024-12-06 13:22:00.700171] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:54.252 [2024-12-06 13:22:00.700197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:54.252 [2024-12-06 13:22:00.700225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:54.252 [2024-12-06 13:22:00.700250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:54.252 [2024-12-06 13:22:00.700271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:27:54.252 [2024-12-06 13:22:00.700294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:54.252 [2024-12-06 13:22:00.700315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:54.252 [2024-12-06 13:22:00.700339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:54.252 [2024-12-06 13:22:00.700360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:54.252 [2024-12-06 13:22:00.700386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:54.252 [2024-12-06 13:22:00.700408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:54.252 [2024-12-06 13:22:00.700434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:54.252 [2024-12-06 13:22:00.700455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:54.252 [2024-12-06 13:22:00.700480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:54.252 [2024-12-06 13:22:00.700503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:54.252 [2024-12-06 13:22:00.700527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:54.252 [2024-12-06 13:22:00.700547] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:54.252 [2024-12-06 13:22:00.700577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:54.252 [2024-12-06 13:22:00.700598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:54.252 [2024-12-06 13:22:00.700632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:54.252 [2024-12-06 13:22:00.700653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:54.252 [2024-12-06 13:22:00.700675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:54.252 [2024-12-06 13:22:00.700698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.252 [2024-12-06 13:22:00.700721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:54.252 [2024-12-06 13:22:00.700742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.362 ms 00:27:54.252 [2024-12-06 13:22:00.700763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.252 [2024-12-06 13:22:00.700994] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:27:54.252 [2024-12-06 13:22:00.701033] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:56.783 [2024-12-06 13:22:02.687391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.687483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:56.783 [2024-12-06 13:22:02.687536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1986.408 ms 00:27:56.783 [2024-12-06 13:22:02.687563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.724096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.724196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.783 [2024-12-06 13:22:02.724232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.115 ms 00:27:56.783 [2024-12-06 13:22:02.724260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.724521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.724570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:56.783 [2024-12-06 13:22:02.724624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:27:56.783 [2024-12-06 13:22:02.724654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.793491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.793604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:56.783 [2024-12-06 13:22:02.793643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.772 ms 00:27:56.783 [2024-12-06 13:22:02.793678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.793944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.793983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.783 [2024-12-06 13:22:02.794011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:56.783 [2024-12-06 13:22:02.794038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.794482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.794549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.783 [2024-12-06 13:22:02.794580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:27:56.783 [2024-12-06 13:22:02.794608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.794887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.794934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.783 [2024-12-06 13:22:02.794985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:27:56.783 [2024-12-06 13:22:02.795011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.817022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.817085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:27:56.783 [2024-12-06 13:22:02.817108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.957 ms 00:27:56.783 [2024-12-06 13:22:02.817122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.835109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:56.783 [2024-12-06 13:22:02.851618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.851710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:56.783 [2024-12-06 13:22:02.851736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.292 ms 00:27:56.783 [2024-12-06 13:22:02.851749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.914763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.914908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:56.783 [2024-12-06 13:22:02.914956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.809 ms 00:27:56.783 [2024-12-06 13:22:02.914984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.915440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.915499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:56.783 [2024-12-06 13:22:02.915555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:27:56.783 [2024-12-06 13:22:02.915593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.959949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.960216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:56.783 [2024-12-06 13:22:02.960257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.277 ms 00:27:56.783 [2024-12-06 13:22:02.960282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.995808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.995888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:56.783 [2024-12-06 13:22:02.995914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.334 ms 00:27:56.783 [2024-12-06 13:22:02.995926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:02.996812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:02.996866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:56.783 [2024-12-06 13:22:02.996888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:27:56.783 [2024-12-06 13:22:02.996900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:03.081771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:03.081870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:56.783 [2024-12-06 13:22:03.081913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.813 ms 00:27:56.783 [2024-12-06 13:22:03.081928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
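Two numbers in the startup trace fit together here: the layout dump reports 23592960 L2P entries at 4 bytes each, i.e. a 90 MiB map (matching "Region l2p ... blocks: 90.00 MiB"), while --l2p_dram_limit 60 caps how much of it stays in DRAM, which is how the l2p cache arrives at "59 (of 60) MiB" resident (attributing the remaining MiB to cache bookkeeping is an inference, not stated in the log). The arithmetic, as a quick check:

  # L2P size implied by the layout dump: entries x address size.
  echo $(( 23592960 * 4 / 1024 / 1024 ))            # -> 90 (MiB)
  # User capacity addressed by those entries, one 4 KiB block each:
  echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 ))  # -> 90 (GiB), num_blocks=23592960
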
00:27:56.783 [2024-12-06 13:22:03.115399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:03.115466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:56.783 [2024-12-06 13:22:03.115498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.299 ms 00:27:56.783 [2024-12-06 13:22:03.115513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:03.148771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:03.148877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:56.783 [2024-12-06 13:22:03.148903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.130 ms 00:27:56.783 [2024-12-06 13:22:03.148916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:03.187361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:03.187455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:56.783 [2024-12-06 13:22:03.187481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.290 ms 00:27:56.783 [2024-12-06 13:22:03.187506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:03.187666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:03.187690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:56.783 [2024-12-06 13:22:03.187710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:56.783 [2024-12-06 13:22:03.187722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:03.187834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.783 [2024-12-06 13:22:03.187869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:56.783 [2024-12-06 13:22:03.187886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:56.783 [2024-12-06 13:22:03.187897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.783 [2024-12-06 13:22:03.189026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:56.783 [2024-12-06 13:22:03.193627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2530.820 ms, result 0 00:27:56.783 [2024-12-06 13:22:03.194615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:56.783 { 00:27:56.783 "name": "ftl0", 00:27:56.783 "uuid": "c9c5936a-1bb5-432f-b1c3-6cf254b3be43" 00:27:56.783 } 00:27:56.783 13:22:03 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:27:56.783 13:22:03 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:27:56.783 13:22:03 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:56.783 13:22:03 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:27:56.783 13:22:03 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:56.783 13:22:03 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:56.783 13:22:03 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:57.042 13:22:03 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:57.608 [ 00:27:57.608 { 00:27:57.608 "name": "ftl0", 00:27:57.608 "aliases": [ 00:27:57.608 "c9c5936a-1bb5-432f-b1c3-6cf254b3be43" 00:27:57.608 ], 00:27:57.608 "product_name": "FTL disk", 00:27:57.608 "block_size": 4096, 00:27:57.608 "num_blocks": 23592960, 00:27:57.608 "uuid": "c9c5936a-1bb5-432f-b1c3-6cf254b3be43", 00:27:57.608 "assigned_rate_limits": { 00:27:57.608 "rw_ios_per_sec": 0, 00:27:57.608 "rw_mbytes_per_sec": 0, 00:27:57.608 "r_mbytes_per_sec": 0, 00:27:57.608 "w_mbytes_per_sec": 0 00:27:57.608 }, 00:27:57.608 "claimed": false, 00:27:57.608 "zoned": false, 00:27:57.608 "supported_io_types": { 00:27:57.608 "read": true, 00:27:57.608 "write": true, 00:27:57.608 "unmap": true, 00:27:57.608 "flush": true, 00:27:57.608 "reset": false, 00:27:57.608 "nvme_admin": false, 00:27:57.608 "nvme_io": false, 00:27:57.608 "nvme_io_md": false, 00:27:57.608 "write_zeroes": true, 00:27:57.608 "zcopy": false, 00:27:57.608 "get_zone_info": false, 00:27:57.608 "zone_management": false, 00:27:57.608 "zone_append": false, 00:27:57.608 "compare": false, 00:27:57.608 "compare_and_write": false, 00:27:57.608 "abort": false, 00:27:57.608 "seek_hole": false, 00:27:57.608 "seek_data": false, 00:27:57.608 "copy": false, 00:27:57.608 "nvme_iov_md": false 00:27:57.608 }, 00:27:57.609 "driver_specific": { 00:27:57.609 "ftl": { 00:27:57.609 "base_bdev": "b1f5cdc1-afb8-4e58-90d4-b89052df1885", 00:27:57.609 "cache": "nvc0n1p0" 00:27:57.609 } 00:27:57.609 } 00:27:57.609 } 00:27:57.609 ] 00:27:57.609 13:22:03 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:27:57.609 13:22:03 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:27:57.609 13:22:03 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:57.867 13:22:04 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:27:57.867 13:22:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:27:58.125 13:22:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:27:58.125 { 00:27:58.125 "name": "ftl0", 00:27:58.125 "aliases": [ 00:27:58.125 "c9c5936a-1bb5-432f-b1c3-6cf254b3be43" 00:27:58.125 ], 00:27:58.125 "product_name": "FTL disk", 00:27:58.125 "block_size": 4096, 00:27:58.125 "num_blocks": 23592960, 00:27:58.125 "uuid": "c9c5936a-1bb5-432f-b1c3-6cf254b3be43", 00:27:58.125 "assigned_rate_limits": { 00:27:58.125 "rw_ios_per_sec": 0, 00:27:58.125 "rw_mbytes_per_sec": 0, 00:27:58.125 "r_mbytes_per_sec": 0, 00:27:58.125 "w_mbytes_per_sec": 0 00:27:58.125 }, 00:27:58.125 "claimed": false, 00:27:58.125 "zoned": false, 00:27:58.125 "supported_io_types": { 00:27:58.125 "read": true, 00:27:58.125 "write": true, 00:27:58.125 "unmap": true, 00:27:58.125 "flush": true, 00:27:58.125 "reset": false, 00:27:58.125 "nvme_admin": false, 00:27:58.125 "nvme_io": false, 00:27:58.125 "nvme_io_md": false, 00:27:58.125 "write_zeroes": true, 00:27:58.125 "zcopy": false, 00:27:58.125 "get_zone_info": false, 00:27:58.125 "zone_management": false, 00:27:58.125 "zone_append": false, 00:27:58.125 "compare": false, 00:27:58.125 "compare_and_write": false, 00:27:58.125 "abort": false, 00:27:58.125 "seek_hole": false, 00:27:58.125 "seek_data": false, 00:27:58.125 "copy": false, 00:27:58.125 "nvme_iov_md": false 00:27:58.125 }, 00:27:58.125 "driver_specific": { 00:27:58.125 "ftl": { 00:27:58.125 "base_bdev": "b1f5cdc1-afb8-4e58-90d4-b89052df1885", 
00:27:58.125 "cache": "nvc0n1p0" 00:27:58.125 } 00:27:58.125 } 00:27:58.125 } 00:27:58.125 ]' 00:27:58.125 13:22:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:27:58.125 13:22:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:27:58.125 13:22:04 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:58.384 [2024-12-06 13:22:04.888242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.384 [2024-12-06 13:22:04.888317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:58.384 [2024-12-06 13:22:04.888344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:58.384 [2024-12-06 13:22:04.888362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.384 [2024-12-06 13:22:04.888409] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:58.384 [2024-12-06 13:22:04.891812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.384 [2024-12-06 13:22:04.891874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:58.384 [2024-12-06 13:22:04.891901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:27:58.384 [2024-12-06 13:22:04.891914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.384 [2024-12-06 13:22:04.892547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.384 [2024-12-06 13:22:04.892587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:58.384 [2024-12-06 13:22:04.892607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:27:58.384 [2024-12-06 13:22:04.892620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.384 [2024-12-06 13:22:04.896367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.384 [2024-12-06 13:22:04.896408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:58.384 [2024-12-06 13:22:04.896426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.705 ms 00:27:58.384 [2024-12-06 13:22:04.896438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.384 [2024-12-06 13:22:04.904963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.384 [2024-12-06 13:22:04.905036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:58.384 [2024-12-06 13:22:04.905071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.437 ms 00:27:58.384 [2024-12-06 13:22:04.905095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.643 [2024-12-06 13:22:04.941193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.643 [2024-12-06 13:22:04.941261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:58.643 [2024-12-06 13:22:04.941288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.907 ms 00:27:58.643 [2024-12-06 13:22:04.941301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.644 [2024-12-06 13:22:04.961476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.644 [2024-12-06 13:22:04.961549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:58.644 [2024-12-06 13:22:04.961574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 20.034 ms 00:27:58.644 [2024-12-06 13:22:04.961591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.644 [2024-12-06 13:22:04.961948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.644 [2024-12-06 13:22:04.961976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:58.644 [2024-12-06 13:22:04.961994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:27:58.644 [2024-12-06 13:22:04.962006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.644 [2024-12-06 13:22:04.994909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.644 [2024-12-06 13:22:04.995190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:58.644 [2024-12-06 13:22:04.995229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.859 ms 00:27:58.644 [2024-12-06 13:22:04.995244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.644 [2024-12-06 13:22:05.030476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.644 [2024-12-06 13:22:05.030710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:58.644 [2024-12-06 13:22:05.030752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.082 ms 00:27:58.644 [2024-12-06 13:22:05.030767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.644 [2024-12-06 13:22:05.065739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.644 [2024-12-06 13:22:05.065993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:58.644 [2024-12-06 13:22:05.066032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.766 ms 00:27:58.644 [2024-12-06 13:22:05.066046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.644 [2024-12-06 13:22:05.097267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.644 [2024-12-06 13:22:05.097513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:58.644 [2024-12-06 13:22:05.097551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.025 ms 00:27:58.644 [2024-12-06 13:22:05.097565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.644 [2024-12-06 13:22:05.097699] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:58.644 [2024-12-06 13:22:05.097727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097827] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.097999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 
[2024-12-06 13:22:05.098244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:58.644 [2024-12-06 13:22:05.098363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:27:58.645 [2024-12-06 13:22:05.098570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.098988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:58.645 [2024-12-06 13:22:05.099169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:58.645 [2024-12-06 13:22:05.099195] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9c5936a-1bb5-432f-b1c3-6cf254b3be43 00:27:58.645 [2024-12-06 13:22:05.099207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:58.645 [2024-12-06 13:22:05.099226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:58.645 [2024-12-06 13:22:05.099238] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:58.645 [2024-12-06 13:22:05.099255] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:58.646 [2024-12-06 13:22:05.099266] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:58.646 [2024-12-06 13:22:05.099279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
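In the statistics dump here, WAF (write amplification factor) is media writes divided by user writes; with total writes 960 (all of them metadata traffic from startup and shutdown) and user writes 0, it prints as inf. The num_blocks value that trim.sh extracted above with jq can be reproduced standalone; a minimal sketch, assuming the same rpc.py path and a running target as in this trace:

  # Query the FTL bdev and pull its block count, as ftl/trim.sh@59-60 does above
  nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')
  echo "ftl0 num_blocks: $nb"   # 23592960 in this run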
00:27:58.646 [2024-12-06 13:22:05.099290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:58.646 [2024-12-06 13:22:05.099302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:58.646 [2024-12-06 13:22:05.099312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:58.646 [2024-12-06 13:22:05.099326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.646 [2024-12-06 13:22:05.099338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:58.646 [2024-12-06 13:22:05.099353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:27:58.646 [2024-12-06 13:22:05.099365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.646 [2024-12-06 13:22:05.116299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.646 [2024-12-06 13:22:05.116503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:58.646 [2024-12-06 13:22:05.116545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.881 ms 00:27:58.646 [2024-12-06 13:22:05.116578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.646 [2024-12-06 13:22:05.117257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.646 [2024-12-06 13:22:05.117290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:58.646 [2024-12-06 13:22:05.117319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:27:58.646 [2024-12-06 13:22:05.117331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.176795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.176881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.905 [2024-12-06 13:22:05.176906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.176919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.177081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.177102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.905 [2024-12-06 13:22:05.177117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.177129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.177222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.177242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.905 [2024-12-06 13:22:05.177262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.177274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.177312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.177326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.905 [2024-12-06 13:22:05.177340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.177351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.288697] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.288772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.905 [2024-12-06 13:22:05.288796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.288809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.375255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.375557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.905 [2024-12-06 13:22:05.375597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.375612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.375784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.375805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:58.905 [2024-12-06 13:22:05.375824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.375867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.375941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.375957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:58.905 [2024-12-06 13:22:05.375972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.375984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.376150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.376171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:58.905 [2024-12-06 13:22:05.376186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.376201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.376285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.376305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:58.905 [2024-12-06 13:22:05.376320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.376331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.376395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.376412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:58.905 [2024-12-06 13:22:05.376429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.376440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.905 [2024-12-06 13:22:05.376515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.905 [2024-12-06 13:22:05.376532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:58.905 [2024-12-06 13:22:05.376546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.905 [2024-12-06 13:22:05.376557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:58.905 [2024-12-06 13:22:05.376777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 488.523 ms, result 0 00:27:58.905 true 00:27:58.905 13:22:05 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78588 00:27:58.905 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78588 ']' 00:27:58.905 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78588 00:27:58.905 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:58.905 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.905 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78588 00:27:59.164 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:59.164 killing process with pid 78588 00:27:59.164 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:59.164 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78588' 00:27:59.164 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78588 00:27:59.164 13:22:05 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78588 00:28:04.464 13:22:10 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:28:05.029 65536+0 records in 00:28:05.029 65536+0 records out 00:28:05.029 268435456 bytes (268 MB, 256 MiB) copied, 1.19276 s, 225 MB/s 00:28:05.029 13:22:11 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:05.029 [2024-12-06 13:22:11.396191] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
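The spdk_dd run starting here is the pattern-write phase of the trim test: dd produced a 256 MiB random pattern (65536 blocks of 4 KiB, per the dd summary above), and spdk_dd replays it into ftl0 after rebuilding the bdev stack from the ftl.json captured with save_subsystem_config while the previous app was still up. A minimal sketch of that flow under the same paths as this trace (dd's of= argument is elided in the trace; the path below is inferred from the later --if= argument):

  CONFIG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  PATTERN=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern

  # Wrap the live bdev subsystem config in a subsystems array, as ftl/trim.sh@54-56 does
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > "$CONFIG"

  # 65536 blocks of 4 KiB = 256 MiB of random data
  dd if=/dev/urandom of="$PATTERN" bs=4K count=65536

  # Replay the pattern into the FTL bdev; --json recreates the bdevs inside spdk_dd
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$PATTERN" --ob=ftl0 --json="$CONFIG"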
00:28:05.029 [2024-12-06 13:22:11.396634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78803 ] 00:28:05.287 [2024-12-06 13:22:11.649426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.287 [2024-12-06 13:22:11.760989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.854 [2024-12-06 13:22:12.109326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:05.854 [2024-12-06 13:22:12.109416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:05.854 [2024-12-06 13:22:12.284563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.854 [2024-12-06 13:22:12.284656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:05.854 [2024-12-06 13:22:12.284688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:05.854 [2024-12-06 13:22:12.284707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.291683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.854 [2024-12-06 13:22:12.291744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:05.854 [2024-12-06 13:22:12.291773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.911 ms 00:28:05.854 [2024-12-06 13:22:12.291793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.292158] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:05.854 [2024-12-06 13:22:12.293813] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:05.854 [2024-12-06 13:22:12.293891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.854 [2024-12-06 13:22:12.293916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:05.854 [2024-12-06 13:22:12.293937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:28:05.854 [2024-12-06 13:22:12.293955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.295640] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:05.854 [2024-12-06 13:22:12.318821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.854 [2024-12-06 13:22:12.318903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:05.854 [2024-12-06 13:22:12.318933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.182 ms 00:28:05.854 [2024-12-06 13:22:12.318954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.319176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.854 [2024-12-06 13:22:12.319208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:05.854 [2024-12-06 13:22:12.319229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:05.854 [2024-12-06 13:22:12.319249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.324484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:05.854 [2024-12-06 13:22:12.324553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:05.854 [2024-12-06 13:22:12.324580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.128 ms 00:28:05.854 [2024-12-06 13:22:12.324600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.324827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.854 [2024-12-06 13:22:12.324901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:05.854 [2024-12-06 13:22:12.324926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:28:05.854 [2024-12-06 13:22:12.324945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.325027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.854 [2024-12-06 13:22:12.325051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:05.854 [2024-12-06 13:22:12.325071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:05.854 [2024-12-06 13:22:12.325089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.325159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:05.854 [2024-12-06 13:22:12.331953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.854 [2024-12-06 13:22:12.332007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:05.854 [2024-12-06 13:22:12.332032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:28:05.854 [2024-12-06 13:22:12.332052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.854 [2024-12-06 13:22:12.332233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.855 [2024-12-06 13:22:12.332264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:05.855 [2024-12-06 13:22:12.332286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:05.855 [2024-12-06 13:22:12.332304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.855 [2024-12-06 13:22:12.332369] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:05.855 [2024-12-06 13:22:12.332415] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:05.855 [2024-12-06 13:22:12.332499] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:05.855 [2024-12-06 13:22:12.332537] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:05.855 [2024-12-06 13:22:12.332876] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:05.855 [2024-12-06 13:22:12.332915] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:05.855 [2024-12-06 13:22:12.332943] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:05.855 [2024-12-06 13:22:12.332973] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:05.855 [2024-12-06 13:22:12.332996] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:05.855 [2024-12-06 13:22:12.333015] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:05.855 [2024-12-06 13:22:12.333033] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:05.855 [2024-12-06 13:22:12.333051] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:05.855 [2024-12-06 13:22:12.333068] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:05.855 [2024-12-06 13:22:12.333089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.855 [2024-12-06 13:22:12.333108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:05.855 [2024-12-06 13:22:12.333127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:28:05.855 [2024-12-06 13:22:12.333145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.855 [2024-12-06 13:22:12.333412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.855 [2024-12-06 13:22:12.333446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:05.855 [2024-12-06 13:22:12.333466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:28:05.855 [2024-12-06 13:22:12.333484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.855 [2024-12-06 13:22:12.333782] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:05.855 [2024-12-06 13:22:12.333810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:05.855 [2024-12-06 13:22:12.333832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.855 [2024-12-06 13:22:12.333871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.855 [2024-12-06 13:22:12.333893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:05.855 [2024-12-06 13:22:12.333911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:05.855 [2024-12-06 13:22:12.333929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:05.855 [2024-12-06 13:22:12.333956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:05.855 [2024-12-06 13:22:12.333974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:05.855 [2024-12-06 13:22:12.333991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.855 [2024-12-06 13:22:12.334009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:05.855 [2024-12-06 13:22:12.334043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:05.855 [2024-12-06 13:22:12.334062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.855 [2024-12-06 13:22:12.334081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:05.855 [2024-12-06 13:22:12.334099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:05.855 [2024-12-06 13:22:12.334117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:05.855 [2024-12-06 13:22:12.334153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:05.855 [2024-12-06 13:22:12.334170] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:05.855 [2024-12-06 13:22:12.334207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.855 [2024-12-06 13:22:12.334241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:05.855 [2024-12-06 13:22:12.334261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.855 [2024-12-06 13:22:12.334299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:05.855 [2024-12-06 13:22:12.334316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.855 [2024-12-06 13:22:12.334352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:05.855 [2024-12-06 13:22:12.334369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.855 [2024-12-06 13:22:12.334404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:05.855 [2024-12-06 13:22:12.334422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.855 [2024-12-06 13:22:12.334457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:05.855 [2024-12-06 13:22:12.334476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:05.855 [2024-12-06 13:22:12.334494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.855 [2024-12-06 13:22:12.334514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:05.855 [2024-12-06 13:22:12.334532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:05.855 [2024-12-06 13:22:12.334550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:05.855 [2024-12-06 13:22:12.334585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:05.855 [2024-12-06 13:22:12.334603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334620] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:05.855 [2024-12-06 13:22:12.334640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:05.855 [2024-12-06 13:22:12.334666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.855 [2024-12-06 13:22:12.334685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.855 [2024-12-06 13:22:12.334707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:05.855 [2024-12-06 13:22:12.334726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:05.855 [2024-12-06 13:22:12.334744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:05.855 
[2024-12-06 13:22:12.334765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:05.855 [2024-12-06 13:22:12.334783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:05.855 [2024-12-06 13:22:12.334801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:05.855 [2024-12-06 13:22:12.334825] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:05.855 [2024-12-06 13:22:12.334866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.855 [2024-12-06 13:22:12.334889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:05.855 [2024-12-06 13:22:12.334909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:05.855 [2024-12-06 13:22:12.334927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:05.855 [2024-12-06 13:22:12.334947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:05.855 [2024-12-06 13:22:12.334966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:05.855 [2024-12-06 13:22:12.334986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:05.855 [2024-12-06 13:22:12.335004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:05.855 [2024-12-06 13:22:12.335023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:05.855 [2024-12-06 13:22:12.335041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:05.855 [2024-12-06 13:22:12.335060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:05.855 [2024-12-06 13:22:12.335080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:05.855 [2024-12-06 13:22:12.335100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:05.855 [2024-12-06 13:22:12.335119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:05.855 [2024-12-06 13:22:12.335137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:05.855 [2024-12-06 13:22:12.335157] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:05.855 [2024-12-06 13:22:12.335177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.855 [2024-12-06 13:22:12.335198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:05.855 [2024-12-06 13:22:12.335218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:05.856 [2024-12-06 13:22:12.335237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:05.856 [2024-12-06 13:22:12.335260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:05.856 [2024-12-06 13:22:12.335283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.856 [2024-12-06 13:22:12.335310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:05.856 [2024-12-06 13:22:12.335336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:28:05.856 [2024-12-06 13:22:12.335354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.382051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.382142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:06.114 [2024-12-06 13:22:12.382175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.510 ms 00:28:06.114 [2024-12-06 13:22:12.382196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.382623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.382665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:06.114 [2024-12-06 13:22:12.382689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:28:06.114 [2024-12-06 13:22:12.382708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.442653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.442743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:06.114 [2024-12-06 13:22:12.442765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.881 ms 00:28:06.114 [2024-12-06 13:22:12.442778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.442971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.442992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:06.114 [2024-12-06 13:22:12.443005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:06.114 [2024-12-06 13:22:12.443017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.443385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.443404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:06.114 [2024-12-06 13:22:12.443426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:28:06.114 [2024-12-06 13:22:12.443438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.443617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.443636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:06.114 [2024-12-06 13:22:12.443648] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:28:06.114 [2024-12-06 13:22:12.443659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.461212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.461290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:06.114 [2024-12-06 13:22:12.461310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.519 ms 00:28:06.114 [2024-12-06 13:22:12.461323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.478923] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:06.114 [2024-12-06 13:22:12.479022] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:06.114 [2024-12-06 13:22:12.479046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.479060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:06.114 [2024-12-06 13:22:12.479077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.516 ms 00:28:06.114 [2024-12-06 13:22:12.479088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.515411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.515559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:06.114 [2024-12-06 13:22:12.515596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.121 ms 00:28:06.114 [2024-12-06 13:22:12.515616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.534004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.534138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:06.114 [2024-12-06 13:22:12.534171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.079 ms 00:28:06.114 [2024-12-06 13:22:12.534184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.551097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.551200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:06.114 [2024-12-06 13:22:12.551223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.676 ms 00:28:06.114 [2024-12-06 13:22:12.551235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.552205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.552242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:06.114 [2024-12-06 13:22:12.552258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:28:06.114 [2024-12-06 13:22:12.552270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.626715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.114 [2024-12-06 13:22:12.626791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:06.114 [2024-12-06 13:22:12.626812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.407 ms 00:28:06.114 [2024-12-06 13:22:12.626824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.114 [2024-12-06 13:22:12.639811] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:06.372 [2024-12-06 13:22:12.653927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.372 [2024-12-06 13:22:12.654009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:06.372 [2024-12-06 13:22:12.654029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.889 ms 00:28:06.372 [2024-12-06 13:22:12.654043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.372 [2024-12-06 13:22:12.654204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.372 [2024-12-06 13:22:12.654225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:06.372 [2024-12-06 13:22:12.654239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:06.372 [2024-12-06 13:22:12.654250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.372 [2024-12-06 13:22:12.654330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.372 [2024-12-06 13:22:12.654348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:06.372 [2024-12-06 13:22:12.654361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:06.372 [2024-12-06 13:22:12.654372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.372 [2024-12-06 13:22:12.654421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.372 [2024-12-06 13:22:12.654443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:06.372 [2024-12-06 13:22:12.654455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:06.372 [2024-12-06 13:22:12.654466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.372 [2024-12-06 13:22:12.654512] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:06.372 [2024-12-06 13:22:12.654528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.372 [2024-12-06 13:22:12.654539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:06.372 [2024-12-06 13:22:12.654551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:06.372 [2024-12-06 13:22:12.654562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.372 [2024-12-06 13:22:12.686179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.372 [2024-12-06 13:22:12.686238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:06.372 [2024-12-06 13:22:12.686257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.588 ms 00:28:06.372 [2024-12-06 13:22:12.686270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.372 [2024-12-06 13:22:12.686411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.372 [2024-12-06 13:22:12.686432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:06.372 [2024-12-06 13:22:12.686446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:06.372 [2024-12-06 13:22:12.686457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
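The startup layout numbers above are mutually consistent with the bdev JSON from earlier in the test, and the two headline figures can be re-derived with shell arithmetic; a sketch with the values copied from this trace:

  # Values reported in the trace: bdev num_blocks/block_size and the L2P entry size
  nb=23592960        # num_blocks of ftl0 (bdev_get_bdevs)
  bs=4096            # block_size of ftl0
  entry=4            # "L2P address size: 4" from ftl_layout_setup
  echo "usable capacity: $(( nb * bs / 1024 / 1024 )) MiB"    # 92160 MiB = 90 GiB
  echo "l2p table size: $(( nb * entry / 1024 / 1024 )) MiB"  # 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"

The gap between the 103424.00 MiB base device capacity and the 90 GiB exposed capacity is largely band metadata plus spare capacity the FTL holds back for relocation and garbage collection.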
00:28:06.372 [2024-12-06 13:22:12.687450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:06.372 [2024-12-06 13:22:12.691626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 402.628 ms, result 0 00:28:06.372 [2024-12-06 13:22:12.692443] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:06.372 [2024-12-06 13:22:12.709054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:07.306  [2024-12-06T13:22:14.769Z] Copying: 30/256 [MB] (30 MBps) [2024-12-06T13:22:16.141Z] Copying: 60/256 [MB] (30 MBps) [2024-12-06T13:22:17.074Z] Copying: 86/256 [MB] (25 MBps) [2024-12-06T13:22:18.009Z] Copying: 110/256 [MB] (24 MBps) [2024-12-06T13:22:18.943Z] Copying: 136/256 [MB] (25 MBps) [2024-12-06T13:22:19.878Z] Copying: 164/256 [MB] (28 MBps) [2024-12-06T13:22:20.813Z] Copying: 189/256 [MB] (25 MBps) [2024-12-06T13:22:21.747Z] Copying: 217/256 [MB] (27 MBps) [2024-12-06T13:22:22.317Z] Copying: 245/256 [MB] (27 MBps) [2024-12-06T13:22:22.318Z] Copying: 256/256 [MB] (average 27 MBps)[2024-12-06 13:22:22.165029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:15.790 [2024-12-06 13:22:22.183289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.183369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:15.790 [2024-12-06 13:22:22.183404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:15.790 [2024-12-06 13:22:22.183438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.790 [2024-12-06 13:22:22.183492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:15.790 [2024-12-06 13:22:22.188428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.188498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:15.790 [2024-12-06 13:22:22.188527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:28:15.790 [2024-12-06 13:22:22.188549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.790 [2024-12-06 13:22:22.190412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.190476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:15.790 [2024-12-06 13:22:22.190505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:28:15.790 [2024-12-06 13:22:22.190527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.790 [2024-12-06 13:22:22.198105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.198172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:15.790 [2024-12-06 13:22:22.198190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.536 ms 00:28:15.790 [2024-12-06 13:22:22.198203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.790 [2024-12-06 13:22:22.205739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.205778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:15.790 
[2024-12-06 13:22:22.205793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.480 ms 00:28:15.790 [2024-12-06 13:22:22.205805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.790 [2024-12-06 13:22:22.237718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.237792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:15.790 [2024-12-06 13:22:22.237812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.818 ms 00:28:15.790 [2024-12-06 13:22:22.237824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.790 [2024-12-06 13:22:22.255693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.255765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:15.790 [2024-12-06 13:22:22.255790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.725 ms 00:28:15.790 [2024-12-06 13:22:22.255802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.790 [2024-12-06 13:22:22.256023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.256047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:15.790 [2024-12-06 13:22:22.256061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:28:15.790 [2024-12-06 13:22:22.256086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.790 [2024-12-06 13:22:22.287478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.790 [2024-12-06 13:22:22.287549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:15.790 [2024-12-06 13:22:22.287570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.365 ms 00:28:15.790 [2024-12-06 13:22:22.287582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.050 [2024-12-06 13:22:22.318837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.050 [2024-12-06 13:22:22.318924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:16.050 [2024-12-06 13:22:22.318944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.170 ms 00:28:16.050 [2024-12-06 13:22:22.318957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.050 [2024-12-06 13:22:22.349900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.050 [2024-12-06 13:22:22.349963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:16.050 [2024-12-06 13:22:22.349982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.844 ms 00:28:16.050 [2024-12-06 13:22:22.349994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.050 [2024-12-06 13:22:22.380912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.050 [2024-12-06 13:22:22.380976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:16.050 [2024-12-06 13:22:22.380995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.793 ms 00:28:16.050 [2024-12-06 13:22:22.381007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.050 [2024-12-06 13:22:22.381087] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:16.050 [2024-12-06 13:22:22.381113] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 
13:22:22.381409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:16.050 [2024-12-06 13:22:22.381507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:28:16.051 [2024-12-06 13:22:22.381704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.381988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:16.051 [2024-12-06 13:22:22.382328] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:16.051 [2024-12-06 13:22:22.382339] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9c5936a-1bb5-432f-b1c3-6cf254b3be43 00:28:16.051 [2024-12-06 13:22:22.382352] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:16.051 [2024-12-06 13:22:22.382363] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:16.051 [2024-12-06 13:22:22.382373] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:16.051 [2024-12-06 13:22:22.382385] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:16.051 [2024-12-06 13:22:22.382395] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:16.051 [2024-12-06 13:22:22.382406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:16.051 [2024-12-06 13:22:22.382418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:16.051 [2024-12-06 13:22:22.382428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:16.051 [2024-12-06 13:22:22.382438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:16.051 [2024-12-06 13:22:22.382449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.051 [2024-12-06 13:22:22.382466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:16.051 [2024-12-06 13:22:22.382479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:28:16.051 [2024-12-06 13:22:22.382490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.051 [2024-12-06 13:22:22.399135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.051 [2024-12-06 13:22:22.399195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:16.051 [2024-12-06 13:22:22.399214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.615 ms 00:28:16.051 [2024-12-06 13:22:22.399226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.051 [2024-12-06 13:22:22.399718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.051 [2024-12-06 13:22:22.399737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:16.051 [2024-12-06 13:22:22.399750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:28:16.051 [2024-12-06 13:22:22.399761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.051 [2024-12-06 13:22:22.446013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.051 [2024-12-06 13:22:22.446088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:16.051 [2024-12-06 13:22:22.446107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.051 [2024-12-06 13:22:22.446120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.051 [2024-12-06 13:22:22.446282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.051 [2024-12-06 13:22:22.446301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:16.052 [2024-12-06 13:22:22.446313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.052 [2024-12-06 13:22:22.446324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:16.052 [2024-12-06 13:22:22.446404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.052 [2024-12-06 13:22:22.446424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:16.052 [2024-12-06 13:22:22.446436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.052 [2024-12-06 13:22:22.446448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.052 [2024-12-06 13:22:22.446473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.052 [2024-12-06 13:22:22.446494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:16.052 [2024-12-06 13:22:22.446506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.052 [2024-12-06 13:22:22.446517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.052 [2024-12-06 13:22:22.550531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.052 [2024-12-06 13:22:22.550609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:16.052 [2024-12-06 13:22:22.550629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.052 [2024-12-06 13:22:22.550641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.310 [2024-12-06 13:22:22.636250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.310 [2024-12-06 13:22:22.636328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:16.310 [2024-12-06 13:22:22.636347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.310 [2024-12-06 13:22:22.636359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.310 [2024-12-06 13:22:22.636449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.310 [2024-12-06 13:22:22.636467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:16.310 [2024-12-06 13:22:22.636479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.311 [2024-12-06 13:22:22.636490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.311 [2024-12-06 13:22:22.636524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.311 [2024-12-06 13:22:22.636537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:16.311 [2024-12-06 13:22:22.636560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.311 [2024-12-06 13:22:22.636572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.311 [2024-12-06 13:22:22.636699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.311 [2024-12-06 13:22:22.636719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:16.311 [2024-12-06 13:22:22.636731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.311 [2024-12-06 13:22:22.636742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.311 [2024-12-06 13:22:22.636795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.311 [2024-12-06 13:22:22.636813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:16.311 [2024-12-06 13:22:22.636825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.311 
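The ftl_dev_dump_bands / ftl_dev_dump_stats block earlier in this shutdown shows all 100 bands free with wr_cnt 0, and WAF reported as inf, consistent with user writes: 0 (write amplification is total media writes over user writes, and 960 / 0 has no finite value). A quick sanity check over that dump (hypothetical one-liner; the build.log name is an assumption, and grep -o extracts matches even from a wrapped log like this one):

grep -oE 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' build.log |
awk '{ state[$NF]++; wr += $7 }     # $7 is wr_cnt, last field is the state
     END { for (s in state) printf "%-8s %d bands\n", s, state[s]
           printf "total wr_cnt: %d\n", wr }'
# Expected for this run: "free 100 bands" and "total wr_cnt: 0".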
[2024-12-06 13:22:22.636869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.311 [2024-12-06 13:22:22.636921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.311 [2024-12-06 13:22:22.636938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:16.311 [2024-12-06 13:22:22.636950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.311 [2024-12-06 13:22:22.636962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.311 [2024-12-06 13:22:22.637015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.311 [2024-12-06 13:22:22.637032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:16.311 [2024-12-06 13:22:22.637051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.311 [2024-12-06 13:22:22.637062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.311 [2024-12-06 13:22:22.637228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 453.966 ms, result 0 00:28:17.245 00:28:17.245 00:28:17.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.245 13:22:23 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78930 00:28:17.245 13:22:23 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:28:17.245 13:22:23 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78930 00:28:17.245 13:22:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78930 ']' 00:28:17.245 13:22:23 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.245 13:22:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.245 13:22:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.245 13:22:23 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.245 13:22:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:17.502 [2024-12-06 13:22:23.898824] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
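trim.sh@71-73 above starts spdk_tgt with the ftl_init log flag and blocks in waitforlisten until the RPC socket answers. A standalone sketch of the same pattern without the autotest_common.sh helpers (the repo path and retry budget are assumptions taken from this run):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!                             # trim.sh records this as 78930 above

# Poll the default UNIX domain socket until the target accepts RPCs,
# roughly what waitforlisten does.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
        &> /dev/null && break
    sleep 0.1
done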
00:28:17.502 [2024-12-06 13:22:23.898991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78930 ] 00:28:17.759 [2024-12-06 13:22:24.072478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.759 [2024-12-06 13:22:24.192480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.688 13:22:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.688 13:22:25 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:28:18.688 13:22:25 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:28:18.945 [2024-12-06 13:22:25.343514] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:18.945 [2024-12-06 13:22:25.343600] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:19.203 [2024-12-06 13:22:25.519209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.519277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:19.203 [2024-12-06 13:22:25.519304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:19.203 [2024-12-06 13:22:25.519319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.523270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.523318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.203 [2024-12-06 13:22:25.523339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.922 ms 00:28:19.203 [2024-12-06 13:22:25.523353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.523572] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:19.203 [2024-12-06 13:22:25.524533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:19.203 [2024-12-06 13:22:25.524578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.524593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.203 [2024-12-06 13:22:25.524609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:28:19.203 [2024-12-06 13:22:25.524621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.525875] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:19.203 [2024-12-06 13:22:25.549056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.549170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:19.203 [2024-12-06 13:22:25.549208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.182 ms 00:28:19.203 [2024-12-06 13:22:25.549246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.549531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.549602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:19.203 [2024-12-06 13:22:25.549632] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:19.203 [2024-12-06 13:22:25.549665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.554603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.554689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.203 [2024-12-06 13:22:25.554712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.806 ms 00:28:19.203 [2024-12-06 13:22:25.554732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.554970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.555013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.203 [2024-12-06 13:22:25.555033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:28:19.203 [2024-12-06 13:22:25.555060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.555106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.555131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:19.203 [2024-12-06 13:22:25.555146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:19.203 [2024-12-06 13:22:25.555163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.555201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:19.203 [2024-12-06 13:22:25.559523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.559562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.203 [2024-12-06 13:22:25.559586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.326 ms 00:28:19.203 [2024-12-06 13:22:25.559600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.559714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.559743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:19.203 [2024-12-06 13:22:25.559764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:19.203 [2024-12-06 13:22:25.559780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.559814] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:19.203 [2024-12-06 13:22:25.559856] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:19.203 [2024-12-06 13:22:25.559916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:19.203 [2024-12-06 13:22:25.559942] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:19.203 [2024-12-06 13:22:25.560060] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:19.203 [2024-12-06 13:22:25.560087] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:19.203 [2024-12-06 13:22:25.560111] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:19.203 [2024-12-06 13:22:25.560127] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560143] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560156] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:19.203 [2024-12-06 13:22:25.560170] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:19.203 [2024-12-06 13:22:25.560181] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:19.203 [2024-12-06 13:22:25.560196] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:19.203 [2024-12-06 13:22:25.560210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.560234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:19.203 [2024-12-06 13:22:25.560247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:28:19.203 [2024-12-06 13:22:25.560261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.560368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-12-06 13:22:25.560388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:19.203 [2024-12-06 13:22:25.560401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:19.203 [2024-12-06 13:22:25.560415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-12-06 13:22:25.560529] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:19.203 [2024-12-06 13:22:25.560553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:19.203 [2024-12-06 13:22:25.560568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-12-06 13:22:25.560597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:19.203 [2024-12-06 13:22:25.560610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:19.203 [2024-12-06 13:22:25.560621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:19.203 [2024-12-06 13:22:25.560648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:19.203 [2024-12-06 13:22:25.560661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.203 [2024-12-06 13:22:25.560672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:19.203 [2024-12-06 13:22:25.560694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:19.203 [2024-12-06 13:22:25.560705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.203 [2024-12-06 13:22:25.560718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:19.203 [2024-12-06 13:22:25.560729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:19.203 [2024-12-06 13:22:25.560742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 
[2024-12-06 13:22:25.560753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:19.203 [2024-12-06 13:22:25.560766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-12-06 13:22:25.560804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:19.203 [2024-12-06 13:22:25.560815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:19.203 [2024-12-06 13:22:25.560828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:19.203 [2024-12-06 13:22:25.560881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:19.203 [2024-12-06 13:22:25.560893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:19.203 [2024-12-06 13:22:25.560922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:19.203 [2024-12-06 13:22:25.560941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:19.203 [2024-12-06 13:22:25.560970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:19.203 [2024-12-06 13:22:25.560981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.203 [2024-12-06 13:22:25.560998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:19.203 [2024-12-06 13:22:25.561010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:19.203 [2024-12-06 13:22:25.561027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.203 [2024-12-06 13:22:25.561040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:19.203 [2024-12-06 13:22:25.561056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:19.203 [2024-12-06 13:22:25.561068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.203 [2024-12-06 13:22:25.561085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:19.203 [2024-12-06 13:22:25.561097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:19.203 [2024-12-06 13:22:25.561117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-12-06 13:22:25.561129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:19.203 [2024-12-06 13:22:25.561145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:19.203 [2024-12-06 13:22:25.561157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-12-06 13:22:25.561174] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:19.203 [2024-12-06 13:22:25.561192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:19.203 [2024-12-06 13:22:25.561209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.203 [2024-12-06 13:22:25.561222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-12-06 13:22:25.561239] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:28:19.203 [2024-12-06 13:22:25.561252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:19.203 [2024-12-06 13:22:25.561265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:19.203 [2024-12-06 13:22:25.561277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:19.203 [2024-12-06 13:22:25.561290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:19.203 [2024-12-06 13:22:25.561302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:19.203 [2024-12-06 13:22:25.561318] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:19.203 [2024-12-06 13:22:25.561333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.203 [2024-12-06 13:22:25.561350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:19.203 [2024-12-06 13:22:25.561362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:19.203 [2024-12-06 13:22:25.561376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:19.203 [2024-12-06 13:22:25.561388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:19.203 [2024-12-06 13:22:25.561403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:19.203 [2024-12-06 13:22:25.561415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:19.203 [2024-12-06 13:22:25.561428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:19.203 [2024-12-06 13:22:25.561441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:19.203 [2024-12-06 13:22:25.561454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:19.203 [2024-12-06 13:22:25.561466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:19.203 [2024-12-06 13:22:25.561490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:19.203 [2024-12-06 13:22:25.561502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:19.203 [2024-12-06 13:22:25.561516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:19.203 [2024-12-06 13:22:25.561529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:19.203 [2024-12-06 13:22:25.561543] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:19.203 [2024-12-06 
13:22:25.561556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.203 [2024-12-06 13:22:25.561573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.203 [2024-12-06 13:22:25.561585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:19.203 [2024-12-06 13:22:25.561599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:19.203 [2024-12-06 13:22:25.561610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:19.204 [2024-12-06 13:22:25.561626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.561638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:19.204 [2024-12-06 13:22:25.561657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:28:19.204 [2024-12-06 13:22:25.561689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.596240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.596307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.204 [2024-12-06 13:22:25.596335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.463 ms 00:28:19.204 [2024-12-06 13:22:25.596356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.596559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.596580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:19.204 [2024-12-06 13:22:25.596601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:19.204 [2024-12-06 13:22:25.596614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.639559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.639633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.204 [2024-12-06 13:22:25.639669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.902 ms 00:28:19.204 [2024-12-06 13:22:25.639683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.639856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.639878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.204 [2024-12-06 13:22:25.639900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:19.204 [2024-12-06 13:22:25.639914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.640247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.640285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.204 [2024-12-06 13:22:25.640306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:28:19.204 [2024-12-06 13:22:25.640319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.640486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.640505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.204 [2024-12-06 13:22:25.640524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:28:19.204 [2024-12-06 13:22:25.640537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.659735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.659801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.204 [2024-12-06 13:22:25.659828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.157 ms 00:28:19.204 [2024-12-06 13:22:25.659856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.691601] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:19.204 [2024-12-06 13:22:25.691691] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:19.204 [2024-12-06 13:22:25.691721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.691736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:19.204 [2024-12-06 13:22:25.691756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.663 ms 00:28:19.204 [2024-12-06 13:22:25.691781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-12-06 13:22:25.721876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-12-06 13:22:25.721951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:19.204 [2024-12-06 13:22:25.721975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.857 ms 00:28:19.204 [2024-12-06 13:22:25.721989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.738395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.738458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:19.462 [2024-12-06 13:22:25.738484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.243 ms 00:28:19.462 [2024-12-06 13:22:25.738497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.754240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.754312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:19.462 [2024-12-06 13:22:25.754345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.608 ms 00:28:19.462 [2024-12-06 13:22:25.754358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.755291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.755330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:19.462 [2024-12-06 13:22:25.755353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:28:19.462 [2024-12-06 13:22:25.755368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 
13:22:25.830530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.830597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:19.462 [2024-12-06 13:22:25.830627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.116 ms 00:28:19.462 [2024-12-06 13:22:25.830642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.843832] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:19.462 [2024-12-06 13:22:25.857967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.858056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:19.462 [2024-12-06 13:22:25.858084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.087 ms 00:28:19.462 [2024-12-06 13:22:25.858103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.858275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.858302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:19.462 [2024-12-06 13:22:25.858318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:19.462 [2024-12-06 13:22:25.858336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.858403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.858426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:19.462 [2024-12-06 13:22:25.858441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:19.462 [2024-12-06 13:22:25.858465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.858498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.858533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:19.462 [2024-12-06 13:22:25.858547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:19.462 [2024-12-06 13:22:25.858564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.858612] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:19.462 [2024-12-06 13:22:25.858642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.858665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:19.462 [2024-12-06 13:22:25.858684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:19.462 [2024-12-06 13:22:25.858696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.891220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.891318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:19.462 [2024-12-06 13:22:25.891349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.453 ms 00:28:19.462 [2024-12-06 13:22:25.891365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.891631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.462 [2024-12-06 13:22:25.891663] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:19.462 [2024-12-06 13:22:25.891686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:19.462 [2024-12-06 13:22:25.891709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.462 [2024-12-06 13:22:25.892887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.462 [2024-12-06 13:22:25.897438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.271 ms, result 0 00:28:19.462 [2024-12-06 13:22:25.898561] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:19.462 Some configs were skipped because the RPC state that can call them passed over. 00:28:19.462 13:22:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:28:20.027 [2024-12-06 13:22:26.277397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.027 [2024-12-06 13:22:26.277482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:20.027 [2024-12-06 13:22:26.277507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:28:20.027 [2024-12-06 13:22:26.277528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.027 [2024-12-06 13:22:26.277583] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.630 ms, result 0 00:28:20.027 true 00:28:20.027 13:22:26 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:28:20.027 [2024-12-06 13:22:26.553313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.027 [2024-12-06 13:22:26.553382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:20.027 [2024-12-06 13:22:26.553411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:28:20.027 [2024-12-06 13:22:26.553426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.027 [2024-12-06 13:22:26.553490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.137 ms, result 0 00:28:20.284 true 00:28:20.284 13:22:26 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78930 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78930 ']' 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78930 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78930 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78930' 00:28:20.284 killing process with pid 78930 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78930 00:28:20.284 13:22:26 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78930 00:28:21.217 [2024-12-06 13:22:27.642483] 
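The two bdev_ftl_unmap calls issued by trim.sh@78/@79 above trim 1024 blocks at the start of the device and at LBA 23591936, which against the 23592960 L2P entries reported during startup is exactly the last 1024 blocks. Spelled out as a sketch, with the killprocess-style teardown that triggers the 'FTL shutdown' trace that follows ($SPDK and $svcpid as in the sketch above):

"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

kill -0 "$svcpid" && kill "$svcpid"   # stop spdk_tgt (pid 78930 in this run)
wait "$svcpid"                        # the shutdown steps below print here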
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.217 [2024-12-06 13:22:27.642572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:21.217 [2024-12-06 13:22:27.642595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:21.217 [2024-12-06 13:22:27.642610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.217 [2024-12-06 13:22:27.642647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:21.217 [2024-12-06 13:22:27.646120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.217 [2024-12-06 13:22:27.646183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:21.217 [2024-12-06 13:22:27.646209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms 00:28:21.217 [2024-12-06 13:22:27.646222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.217 [2024-12-06 13:22:27.646590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.217 [2024-12-06 13:22:27.646625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:21.217 [2024-12-06 13:22:27.646643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:28:21.217 [2024-12-06 13:22:27.646656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.217 [2024-12-06 13:22:27.650894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.217 [2024-12-06 13:22:27.650976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:21.217 [2024-12-06 13:22:27.651004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:28:21.217 [2024-12-06 13:22:27.651016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.217 [2024-12-06 13:22:27.659771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.217 [2024-12-06 13:22:27.659875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:21.217 [2024-12-06 13:22:27.659905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.678 ms 00:28:21.217 [2024-12-06 13:22:27.659919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.217 [2024-12-06 13:22:27.672854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.217 [2024-12-06 13:22:27.672948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:21.217 [2024-12-06 13:22:27.672977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.784 ms 00:28:21.217 [2024-12-06 13:22:27.672991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.217 [2024-12-06 13:22:27.682171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.217 [2024-12-06 13:22:27.682275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:21.217 [2024-12-06 13:22:27.682299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.086 ms 00:28:21.217 [2024-12-06 13:22:27.682313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.217 [2024-12-06 13:22:27.682575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.217 [2024-12-06 13:22:27.682597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:21.217 [2024-12-06 13:22:27.682614] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:28:21.218 [2024-12-06 13:22:27.682626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.218 [2024-12-06 13:22:27.696141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.218 [2024-12-06 13:22:27.696235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:21.218 [2024-12-06 13:22:27.696264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.451 ms 00:28:21.218 [2024-12-06 13:22:27.696279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.218 [2024-12-06 13:22:27.709616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.218 [2024-12-06 13:22:27.709709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:21.218 [2024-12-06 13:22:27.709745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.201 ms 00:28:21.218 [2024-12-06 13:22:27.709759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.218 [2024-12-06 13:22:27.722677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.218 [2024-12-06 13:22:27.722772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:21.218 [2024-12-06 13:22:27.722796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.794 ms 00:28:21.218 [2024-12-06 13:22:27.722809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.218 [2024-12-06 13:22:27.735754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.218 [2024-12-06 13:22:27.735835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:21.218 [2024-12-06 13:22:27.735872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.788 ms 00:28:21.218 [2024-12-06 13:22:27.735885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.218 [2024-12-06 13:22:27.735945] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:21.218 [2024-12-06 13:22:27.735970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.735988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 
13:22:27.736113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:28:21.218 [2024-12-06 13:22:27.736514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.736987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.737001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.737013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.737032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.737047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.737067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:21.218 [2024-12-06 13:22:27.737081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:21.219 [2024-12-06 13:22:27.737543] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:21.219 [2024-12-06 13:22:27.737565] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9c5936a-1bb5-432f-b1c3-6cf254b3be43 00:28:21.219 [2024-12-06 13:22:27.737594] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:21.219 [2024-12-06 13:22:27.737609] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:21.219 [2024-12-06 13:22:27.737621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:21.219 [2024-12-06 13:22:27.737634] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:21.219 [2024-12-06 13:22:27.737646] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:21.219 [2024-12-06 13:22:27.737660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:21.219 [2024-12-06 13:22:27.737671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:21.219 [2024-12-06 13:22:27.737684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:21.219 [2024-12-06 13:22:27.737695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:21.219 [2024-12-06 13:22:27.737709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:21.219 [2024-12-06 13:22:27.737722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:21.219 [2024-12-06 13:22:27.737736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:28:21.219 [2024-12-06 13:22:27.737748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.477 [2024-12-06 13:22:27.755037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.477 [2024-12-06 13:22:27.755115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:21.477 [2024-12-06 13:22:27.755156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.187 ms 00:28:21.477 [2024-12-06 13:22:27.755171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.477 [2024-12-06 13:22:27.755722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.477 [2024-12-06 13:22:27.755756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:21.477 [2024-12-06 13:22:27.755787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:28:21.477 [2024-12-06 13:22:27.755800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.477 [2024-12-06 13:22:27.815150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.477 [2024-12-06 13:22:27.815222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.477 [2024-12-06 13:22:27.815250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.477 [2024-12-06 13:22:27.815264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.477 [2024-12-06 13:22:27.815431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.477 [2024-12-06 13:22:27.815450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.477 [2024-12-06 13:22:27.815478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.477 [2024-12-06 13:22:27.815491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.477 [2024-12-06 13:22:27.815582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.477 [2024-12-06 13:22:27.815611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.477 [2024-12-06 13:22:27.815637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.477 [2024-12-06 13:22:27.815650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.477 [2024-12-06 13:22:27.815683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.477 [2024-12-06 13:22:27.815699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:21.477 [2024-12-06 13:22:27.815717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.477 [2024-12-06 13:22:27.815734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.477 [2024-12-06 13:22:27.930469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.477 [2024-12-06 13:22:27.930547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:21.477 [2024-12-06 13:22:27.930577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.477 [2024-12-06 13:22:27.930592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.751 [2024-12-06 
13:22:28.025792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.751 [2024-12-06 13:22:28.025887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.751 [2024-12-06 13:22:28.025917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.751 [2024-12-06 13:22:28.025938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.751 [2024-12-06 13:22:28.026071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.751 [2024-12-06 13:22:28.026092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:21.751 [2024-12-06 13:22:28.026117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.751 [2024-12-06 13:22:28.026130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.752 [2024-12-06 13:22:28.026174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.752 [2024-12-06 13:22:28.026190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:21.752 [2024-12-06 13:22:28.026209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.752 [2024-12-06 13:22:28.026221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.752 [2024-12-06 13:22:28.026376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.752 [2024-12-06 13:22:28.026409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:21.752 [2024-12-06 13:22:28.026444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.752 [2024-12-06 13:22:28.026460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.752 [2024-12-06 13:22:28.026529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.752 [2024-12-06 13:22:28.026557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:21.752 [2024-12-06 13:22:28.026577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.752 [2024-12-06 13:22:28.026590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.752 [2024-12-06 13:22:28.026651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.752 [2024-12-06 13:22:28.026668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:21.752 [2024-12-06 13:22:28.026690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.752 [2024-12-06 13:22:28.026703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.752 [2024-12-06 13:22:28.026765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.752 [2024-12-06 13:22:28.026791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:21.752 [2024-12-06 13:22:28.026812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.752 [2024-12-06 13:22:28.026825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.752 [2024-12-06 13:22:28.027026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.512 ms, result 0 00:28:22.719 13:22:28 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:22.719 13:22:28 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:22.719 [2024-12-06 13:22:29.108003] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:28:22.719 [2024-12-06 13:22:29.108237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78996 ] 00:28:22.977 [2024-12-06 13:22:29.302448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.977 [2024-12-06 13:22:29.448595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.543 [2024-12-06 13:22:29.777446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:23.543 [2024-12-06 13:22:29.777561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:23.543 [2024-12-06 13:22:29.941479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.941546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:23.543 [2024-12-06 13:22:29.941566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:23.543 [2024-12-06 13:22:29.941579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.945110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.945157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:23.543 [2024-12-06 13:22:29.945174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:28:23.543 [2024-12-06 13:22:29.945186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.945379] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:23.543 [2024-12-06 13:22:29.946359] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:23.543 [2024-12-06 13:22:29.946403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.946417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:23.543 [2024-12-06 13:22:29.946430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:28:23.543 [2024-12-06 13:22:29.946442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.947773] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:23.543 [2024-12-06 13:22:29.964396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.964462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:23.543 [2024-12-06 13:22:29.964483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.622 ms 00:28:23.543 [2024-12-06 13:22:29.964495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.964660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.964683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:23.543 [2024-12-06 13:22:29.964697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:28:23.543 [2024-12-06 13:22:29.964709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.969380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.969440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:23.543 [2024-12-06 13:22:29.969459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.607 ms 00:28:23.543 [2024-12-06 13:22:29.969471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.969634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.969657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:23.543 [2024-12-06 13:22:29.969671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:23.543 [2024-12-06 13:22:29.969683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.969728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.969744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:23.543 [2024-12-06 13:22:29.969756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:23.543 [2024-12-06 13:22:29.969767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.969799] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:23.543 [2024-12-06 13:22:29.974172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.974211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.543 [2024-12-06 13:22:29.974227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.381 ms 00:28:23.543 [2024-12-06 13:22:29.974239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.974316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.543 [2024-12-06 13:22:29.974336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:23.543 [2024-12-06 13:22:29.974349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:23.543 [2024-12-06 13:22:29.974360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-06 13:22:29.974400] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:23.543 [2024-12-06 13:22:29.974430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:23.543 [2024-12-06 13:22:29.974473] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:23.543 [2024-12-06 13:22:29.974493] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:23.543 [2024-12-06 13:22:29.974605] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:23.543 [2024-12-06 13:22:29.974628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:23.543 [2024-12-06 13:22:29.974644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:23.543 [2024-12-06 13:22:29.974664] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:23.543 [2024-12-06 13:22:29.974677] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:23.543 [2024-12-06 13:22:29.974690] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:23.543 [2024-12-06 13:22:29.974700] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:23.543 [2024-12-06 13:22:29.974711] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:23.543 [2024-12-06 13:22:29.974722] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:23.544 [2024-12-06 13:22:29.974734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.544 [2024-12-06 13:22:29.974746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:23.544 [2024-12-06 13:22:29.974758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:28:23.544 [2024-12-06 13:22:29.974769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.544 [2024-12-06 13:22:29.974914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.544 [2024-12-06 13:22:29.974939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:23.544 [2024-12-06 13:22:29.974952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:28:23.544 [2024-12-06 13:22:29.974963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.544 [2024-12-06 13:22:29.975081] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:23.544 [2024-12-06 13:22:29.975098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:23.544 [2024-12-06 13:22:29.975110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:23.544 [2024-12-06 13:22:29.975145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:23.544 [2024-12-06 13:22:29.975177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:23.544 [2024-12-06 13:22:29.975198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:23.544 [2024-12-06 13:22:29.975223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:23.544 [2024-12-06 13:22:29.975233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:23.544 [2024-12-06 13:22:29.975244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:23.544 [2024-12-06 13:22:29.975255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:23.544 [2024-12-06 13:22:29.975266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975278] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:23.544 [2024-12-06 13:22:29.975289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:23.544 [2024-12-06 13:22:29.975322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:23.544 [2024-12-06 13:22:29.975353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:23.544 [2024-12-06 13:22:29.975384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:23.544 [2024-12-06 13:22:29.975415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:23.544 [2024-12-06 13:22:29.975446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:23.544 [2024-12-06 13:22:29.975466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:23.544 [2024-12-06 13:22:29.975476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:23.544 [2024-12-06 13:22:29.975487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:23.544 [2024-12-06 13:22:29.975510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:23.544 [2024-12-06 13:22:29.975527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:23.544 [2024-12-06 13:22:29.975538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:23.544 [2024-12-06 13:22:29.975559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:23.544 [2024-12-06 13:22:29.975570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975580] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:23.544 [2024-12-06 13:22:29.975592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:23.544 [2024-12-06 13:22:29.975609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:23.544 [2024-12-06 13:22:29.975632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:23.544 
[2024-12-06 13:22:29.975643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:23.544 [2024-12-06 13:22:29.975653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:23.544 [2024-12-06 13:22:29.975664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:23.544 [2024-12-06 13:22:29.975674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:23.544 [2024-12-06 13:22:29.975685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:23.544 [2024-12-06 13:22:29.975697] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:23.544 [2024-12-06 13:22:29.975711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:23.544 [2024-12-06 13:22:29.975724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:23.544 [2024-12-06 13:22:29.975737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:23.544 [2024-12-06 13:22:29.975748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:23.544 [2024-12-06 13:22:29.975760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:23.544 [2024-12-06 13:22:29.975771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:23.544 [2024-12-06 13:22:29.975782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:23.544 [2024-12-06 13:22:29.975794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:23.544 [2024-12-06 13:22:29.975806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:23.544 [2024-12-06 13:22:29.975817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:23.544 [2024-12-06 13:22:29.975829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:23.544 [2024-12-06 13:22:29.975856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:23.544 [2024-12-06 13:22:29.975871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:23.544 [2024-12-06 13:22:29.975883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:23.544 [2024-12-06 13:22:29.975895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:23.544 [2024-12-06 13:22:29.975906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:23.544 [2024-12-06 13:22:29.975919] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:23.544 [2024-12-06 13:22:29.975931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:23.544 [2024-12-06 13:22:29.975942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:23.544 [2024-12-06 13:22:29.975954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:23.544 [2024-12-06 13:22:29.975966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:23.544 [2024-12-06 13:22:29.975978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.544 [2024-12-06 13:22:29.975995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:23.544 [2024-12-06 13:22:29.976008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:28:23.544 [2024-12-06 13:22:29.976019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.544 [2024-12-06 13:22:30.010069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.544 [2024-12-06 13:22:30.010140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:23.544 [2024-12-06 13:22:30.010161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.971 ms 00:28:23.544 [2024-12-06 13:22:30.010174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.544 [2024-12-06 13:22:30.010391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.544 [2024-12-06 13:22:30.010413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:23.544 [2024-12-06 13:22:30.010427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:23.544 [2024-12-06 13:22:30.010439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.069088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.069156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.803 [2024-12-06 13:22:30.069183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.614 ms 00:28:23.803 [2024-12-06 13:22:30.069195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.069367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.069388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.803 [2024-12-06 13:22:30.069402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:23.803 [2024-12-06 13:22:30.069413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.069773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.069799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.803 [2024-12-06 13:22:30.069822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:28:23.803 [2024-12-06 13:22:30.069833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 
13:22:30.070040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.070067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.803 [2024-12-06 13:22:30.070080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:28:23.803 [2024-12-06 13:22:30.070091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.088085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.088152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.803 [2024-12-06 13:22:30.088175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms 00:28:23.803 [2024-12-06 13:22:30.088187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.105105] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:23.803 [2024-12-06 13:22:30.105173] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:23.803 [2024-12-06 13:22:30.105195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.105208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:23.803 [2024-12-06 13:22:30.105223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.800 ms 00:28:23.803 [2024-12-06 13:22:30.105235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.136100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.136184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:23.803 [2024-12-06 13:22:30.136207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.707 ms 00:28:23.803 [2024-12-06 13:22:30.136220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.152898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.152965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:23.803 [2024-12-06 13:22:30.152984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.483 ms 00:28:23.803 [2024-12-06 13:22:30.152996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.169036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.169098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:23.803 [2024-12-06 13:22:30.169118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.899 ms 00:28:23.803 [2024-12-06 13:22:30.169129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.170089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.170125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:23.803 [2024-12-06 13:22:30.170141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:28:23.803 [2024-12-06 13:22:30.170152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.246489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.246565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:23.803 [2024-12-06 13:22:30.246587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.299 ms 00:28:23.803 [2024-12-06 13:22:30.246599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.259709] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:23.803 [2024-12-06 13:22:30.274256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.803 [2024-12-06 13:22:30.274334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:23.803 [2024-12-06 13:22:30.274355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.482 ms 00:28:23.803 [2024-12-06 13:22:30.274377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.803 [2024-12-06 13:22:30.274549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.804 [2024-12-06 13:22:30.274570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:23.804 [2024-12-06 13:22:30.274584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:23.804 [2024-12-06 13:22:30.274596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.804 [2024-12-06 13:22:30.274665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.804 [2024-12-06 13:22:30.274682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:23.804 [2024-12-06 13:22:30.274695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:23.804 [2024-12-06 13:22:30.274711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.804 [2024-12-06 13:22:30.274757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.804 [2024-12-06 13:22:30.274775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:23.804 [2024-12-06 13:22:30.274788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:23.804 [2024-12-06 13:22:30.274799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.804 [2024-12-06 13:22:30.274873] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:23.804 [2024-12-06 13:22:30.274898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.804 [2024-12-06 13:22:30.274911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:23.804 [2024-12-06 13:22:30.274924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:23.804 [2024-12-06 13:22:30.274935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.804 [2024-12-06 13:22:30.307754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.804 [2024-12-06 13:22:30.307820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:23.804 [2024-12-06 13:22:30.307863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.779 ms 00:28:23.804 [2024-12-06 13:22:30.307892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.804 [2024-12-06 13:22:30.308085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.804 [2024-12-06 13:22:30.308107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:28:23.804 [2024-12-06 13:22:30.308121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:23.804 [2024-12-06 13:22:30.308133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.804 [2024-12-06 13:22:30.309137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:23.804 [2024-12-06 13:22:30.313635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.321 ms, result 0 00:28:23.804 [2024-12-06 13:22:30.314500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:24.061 [2024-12-06 13:22:30.331396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:25.023  [2024-12-06T13:22:32.486Z] Copying: 27/256 [MB] (27 MBps) [2024-12-06T13:22:33.421Z] Copying: 52/256 [MB] (25 MBps) [2024-12-06T13:22:34.355Z] Copying: 76/256 [MB] (23 MBps) [2024-12-06T13:22:35.747Z] Copying: 103/256 [MB] (27 MBps) [2024-12-06T13:22:36.678Z] Copying: 127/256 [MB] (23 MBps) [2024-12-06T13:22:37.613Z] Copying: 148/256 [MB] (21 MBps) [2024-12-06T13:22:38.548Z] Copying: 172/256 [MB] (23 MBps) [2024-12-06T13:22:39.482Z] Copying: 196/256 [MB] (23 MBps) [2024-12-06T13:22:40.416Z] Copying: 218/256 [MB] (22 MBps) [2024-12-06T13:22:41.350Z] Copying: 239/256 [MB] (20 MBps) [2024-12-06T13:22:41.350Z] Copying: 256/256 [MB] (average 23 MBps) [2024-12-06 13:22:41.107880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:34.822 [2024-12-06 13:22:41.126098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.822 [2024-12-06 13:22:41.126190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:34.822 [2024-12-06 13:22:41.126261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:34.822 [2024-12-06 13:22:41.126285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.822 [2024-12-06 13:22:41.126339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:34.822 [2024-12-06 13:22:41.131183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.822 [2024-12-06 13:22:41.131247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:34.822 [2024-12-06 13:22:41.131276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.805 ms 00:28:34.822 [2024-12-06 13:22:41.131297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.822 [2024-12-06 13:22:41.131746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.822 [2024-12-06 13:22:41.131787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:34.822 [2024-12-06 13:22:41.131813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:28:34.822 [2024-12-06 13:22:41.131833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.822 [2024-12-06 13:22:41.136593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.822 [2024-12-06 13:22:41.136654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:34.822 [2024-12-06 13:22:41.136679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:28:34.822 [2024-12-06 13:22:41.136697]
[2024-12-06 13:22:41] [FTL][ftl0] FTL shutdown trace, each step status 0:
  Deinit core IO channel: 0.007 ms (FTL IO channel destroy on ftl_core_thread)
  Unregister IO device: 4.805 ms
  Stop core poller: 0.388 ms
  Persist L2P: 4.678 ms
  Finish L2P trims: 9.661 ms
  Persist NV cache metadata: 46.563 ms
  Persist valid map metadata: 23.734 ms
  Persist P2L metadata: 0.107 ms
  Persist band info metadata: 45.847 ms
  Persist trim metadata: 45.795 ms
  Persist superblock: 45.088 ms
  Set FTL clean state: 45.356 ms
[2024-12-06 13:22:41] [FTL][ftl0] Bands validity: Bands 1 through 100 all report 0 / 261120 wr_cnt: 0 state: free
[2024-12-06 13:22:41] [FTL][ftl0] Statistics:
  device UUID: c9c5936a-1bb5-432f-b1c3-6cf254b3be43
  total valid LBAs: 0
  total writes: 960
  user writes: 0
  WAF: inf
  limits: crit: 0, high: 0, low: 0, start: 0
[2024-12-06 13:22:41] [FTL][ftl0] shutdown trace (continued), each step status 0:
  Dump statistics: 2.589 ms
  Deinitialize L2P: 24.302 ms
  Deinitialize P2L checkpointing: 0.624 ms
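The "WAF: inf" in the statistics dump above falls straight out of the two counters beside it: write amplification is media writes divided by user writes, and this pass issued 960 (metadata) writes against zero user writes. A minimal illustrative calculation in plain C (hypothetical helper, not SPDK code):

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* Write amplification factor as commonly defined: total media writes
 * over host/user writes. A device with no user writes yet has an
 * undefined (infinite) WAF, which the log prints as "inf". */
static double waf(uint64_t total_writes, uint64_t user_writes)
{
    return user_writes ? (double)total_writes / (double)user_writes
                       : INFINITY;
}

int main(void)
{
    printf("WAF: %g\n", waf(960, 0)); /* values from the dump; prints "inf" */
    return 0;
}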
[2024-12-06 13:22:41] [FTL][ftl0] Rollback steps (each 0.000 ms, status 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-12-06 13:22:41] [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 630.885 ms, result 0
13:22:42 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
13:22:42 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
13:22:43 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-06 13:22:43.766227] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
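For context on the trim.sh steps above, before the new spdk_dd run: the cmp invocation passes only when the first 4 MiB of the dumped data file compare equal to /dev/zero, i.e. the trimmed range reads back as zeroes. A rough plain-C stand-in for that check (path taken from the command; the loop is generic, not how cmp is implemented):

#include <stdio.h>
#include <string.h>

int main(void)
{
    enum { CHECK_BYTES = 4194304, CHUNK = 65536 };  /* --bytes=4194304 */
    static unsigned char buf[CHUNK], zero[CHUNK];   /* statics are zero-filled */
    FILE *f = fopen("/home/vagrant/spdk_repo/spdk/test/ftl/data", "rb");
    if (!f) { perror("fopen"); return 1; }
    for (long done = 0; done < CHECK_BYTES; done += CHUNK) {
        if (fread(buf, 1, CHUNK, f) != CHUNK || memcmp(buf, zero, CHUNK)) {
            fprintf(stderr, "non-zero byte within first 4 MiB (offset %ld)\n", done);
            return 1;
        }
    }
    fclose(f);
    puts("first 4 MiB read back as zeroes, as a trimmed range should");
    return 0;
}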
[2024-12-06 13:22:43.766424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79155 ]
[2024-12-06 13:22:43] app.c: Total cores available: 1
[2024-12-06 13:22:44] reactor.c: Reactor started on core 0
[2024-12-06 13:22:44] bdev.c: Currently unable to find bdev with name: nvc0n1 (reported twice while the cache bdev comes up)
[2024-12-06 13:22:44] [FTL][ftl0] FTL startup trace, each step status 0:
  Check configuration: 0.005 ms
  Open base bdev: 3.272 ms
  (Using nvc0n1p0 as write buffer cache; Using bdev as NV Cache device)
  Open cache bdev: 1.031 ms
  (SHM: clean 0, shm_clean 0)
  Load super block: 16.213 ms
  Validate super block: 0.029 ms
  Initialize memory pools: 4.363 ms
  Initialize bands: 0.073 ms
  Register IO device: 0.011 ms (FTL IO channel created on ftl_core_thread)
  Initialize core IO channel: 4.282 ms
  Decorate bands: 0.013 ms
[2024-12-06 13:22:44] [FTL][ftl0] FTL layout setup mode 0
  nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
  nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
  Base device capacity: 103424.00 MiB; NV cache device capacity: 5171.00 MiB
  L2P entries: 23592960; L2P address size: 4; P2L checkpoint pages: 2048; NV cache chunk count 5
  Initialize layout: 0.356 ms; Verify layout: 0.072 ms (both status 0)
[2024-12-06 13:22:44] [FTL][ftl0] NV cache layout (region: offset / blocks, MiB):
  sb: 0.00 / 0.12
  l2p: 0.12 / 90.00
  band_md: 90.12 / 0.50
  band_md_mirror: 90.62 / 0.50
  nvc_md: 123.88 / 0.12
  nvc_md_mirror: 124.00 / 0.12
  p2l0: 91.12 / 8.00
  p2l1: 99.12 / 8.00
  p2l2: 107.12 / 8.00
  p2l3: 115.12 / 8.00
  trim_md: 123.12 / 0.25
  trim_md_mirror: 123.38 / 0.25
  trim_log: 123.62 / 0.12
  trim_log_mirror: 123.75 / 0.12
[2024-12-06 13:22:44] [FTL][ftl0] Base device layout (region: offset / blocks, MiB):
  sb_mirror: 0.00 / 0.12
  vmap: 102400.25 / 3.38
  data_btm: 0.25 / 102400.00
[2024-12-06 13:22:44] [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
  Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-12-06 13:22:44] [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-12-06 13:22:44] [FTL][ftl0] startup trace (continued), each step status 0:
  Layout upgrade: 0.990 ms
  Initialize metadata: 32.903 ms
  Initialize band addresses: 0.081 ms
  Initialize NV cache: 54.127 ms
  Initialize valid map: 0.006 ms
  Initialize trim map: 0.407 ms
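Two figures in the dumps above cross-check neatly: SB metadata region type 0x2 (presumably the L2P, given the matching size and offset) spans 0x5a00 blocks, and 23592960 L2P entries at the stated 4-byte address size need exactly the same 90 MiB, assuming the 4 KiB FTL block size the MiB columns imply (likewise the 0x800-block P2L regions match the 2048 checkpoint pages at 8.00 MiB each). A quick illustrative check in C:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t block_size  = 4096;      /* assumed: implied by the dump */
    const uint64_t l2p_blocks  = 0x5a00;    /* blk_sz of region type 0x2    */
    const uint64_t l2p_entries = 23592960;  /* "L2P entries" from the log   */
    const uint64_t addr_size   = 4;         /* "L2P address size: 4"        */

    printf("region 0x2: %.2f MiB\n",
           (double)(l2p_blocks * block_size) / (1024 * 1024));
    printf("L2P table:  %.2f MiB\n",
           (double)(l2p_entries * addr_size) / (1024 * 1024));
    /* both lines print 90.00, matching "l2p ... blocks: 90.00 MiB" */
    return 0;
}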
[2024-12-06 13:22:44] [FTL][ftl0] startup trace (continued), each step status 0:
  Initialize bands metadata: 0.189 ms
  Initialize reloc: 17.520 ms
  (FTL NV Cache: full chunks = 1, empty chunks = 3; state loaded successfully)
  Restore NV cache metadata: 16.738 ms
  Restore valid map metadata: 30.560 ms
  Restore band info metadata: 16.241 ms
  Restore trim metadata: 15.805 ms
  Initialize P2L checkpointing: 0.815 ms
  Restore P2L checkpoints: 74.563 ms
  (ftl_l2p_cache: l2p maximum resident size is: 59 (of 60) MiB)
  Initialize L2P: 26.908 ms
  Restore L2P: 0.010 ms
  Finalize band initialization: 0.050 ms
  Start core poller: 0.019 ms
  Self test on startup: 0.031 ms (self test skipped)
  Set FTL dirty state: 31.515 ms
  Finalize initialization: 0.048 ms
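The step durations above account for most of the 'FTL startup' total reported just below (357.463 ms); the remainder is time between steps. A small tally of the longer steps, with values copied from this log (the struct is illustrative, not an SPDK type):

#include <stdio.h>

struct step { const char *name; double ms; };

int main(void)
{
    /* a few of the longer steps of this particular startup */
    const struct step steps[] = {
        { "Load super block",        16.213 },
        { "Initialize metadata",     32.903 },
        { "Initialize NV cache",     54.127 },
        { "Restore valid map",       30.560 },
        { "Restore P2L checkpoints", 74.563 },
        { "Initialize L2P",          26.908 },
        { "Set FTL dirty state",     31.515 },
    };
    double total = 0;
    for (size_t i = 0; i < sizeof steps / sizeof steps[0]; i++)
        total += steps[i].ms;
    printf("%.3f ms of the 357.463 ms total\n", total); /* 266.789 */
    return 0;
}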
[2024-12-06 13:22:44] [FTL][ftl0] FTL IO channel created on app_thread
[2024-12-06 13:22:44] [FTL][ftl0] Management process finished, name 'FTL startup', duration = 357.463 ms, result 0
[2024-12-06 13:22:44] [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-06 13:22:44] [FTL][ftl0] FTL IO channel created on app_thread
Copying: 4096/4096 [kB] (average 27 MBps)
[2024-12-06 13:22:45] [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-06 13:22:45] [FTL][ftl0] FTL shutdown trace, each step status 0:
  Deinit core IO channel: 0.004 ms (FTL IO channel destroy on ftl_core_thread)
  Unregister IO device: 3.525 ms
  Stop core poller: 1.576 ms
  Persist L2P: 4.002 ms
  Finish L2P trims: 7.534 ms
  Persist NV cache metadata: 31.481 ms
  Persist valid map metadata: 17.854 ms
  Persist P2L metadata: 0.136 ms
  Persist band info metadata: 31.994 ms
  Persist trim metadata: 31.376 ms
  Persist superblock: 30.984 ms
  Set FTL clean state: 30.753 ms
[2024-12-06 13:22:45] [FTL][ftl0] Bands validity:
  Band 1: 0 / 261120 wr_cnt: 0 state: free
  Band 2: 0 / 261120 wr_cnt: 0 state: free
  Band 3: 0 / 261120 wr_cnt: 0 state: free
  Band 4: 0 / 261120 wr_cnt: 0 state: free
00:28:38.793 [2024-12-06 13:22:45.293185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:38.794 [... Bands 6-100 elided: all 95 remaining ftl_dev_dump_bands entries are identical to Band 5 (0 / 261120 wr_cnt: 0 state: free) ...] 00:28:38.794 [2024-12-06 13:22:45.294949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:38.794 [2024-12-06 13:22:45.294970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9c5936a-1bb5-432f-b1c3-6cf254b3be43 00:28:38.794 [2024-12-06 13:22:45.294990] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:38.794 [2024-12-06 13:22:45.295009] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:28:38.794 [2024-12-06 13:22:45.295027] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:38.794 [2024-12-06 13:22:45.295046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:38.794 [2024-12-06 13:22:45.295064] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:38.794 [2024-12-06 13:22:45.295084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:38.794 [2024-12-06 13:22:45.295119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:38.794 [2024-12-06 13:22:45.295138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:38.794 [2024-12-06 13:22:45.295156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:38.794 [2024-12-06 13:22:45.295175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.794 [2024-12-06 13:22:45.295195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:38.794 [2024-12-06 13:22:45.295215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:28:38.794 [2024-12-06 13:22:45.295233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.794 [2024-12-06 13:22:45.313464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.794 [2024-12-06 13:22:45.313523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:38.794 [2024-12-06 13:22:45.313543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.187 ms 00:28:38.795 [2024-12-06 13:22:45.313556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.795 [2024-12-06 13:22:45.314154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.795 [2024-12-06 13:22:45.314193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:38.795 [2024-12-06 13:22:45.314210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:28:38.795 [2024-12-06 13:22:45.314222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.361270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.361346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:39.053 [2024-12-06 13:22:45.361367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.361386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.361505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.361523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:39.053 [2024-12-06 13:22:45.361536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.361548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.361616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.361636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:39.053 [2024-12-06 13:22:45.361649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.361661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.361694] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.361708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:39.053 [2024-12-06 13:22:45.361720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.361731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.465966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.466033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:39.053 [2024-12-06 13:22:45.466054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.466074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.551154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.551225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:39.053 [2024-12-06 13:22:45.551245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.551259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.551346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.551365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:39.053 [2024-12-06 13:22:45.551377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.551388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.551424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.551446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:39.053 [2024-12-06 13:22:45.551459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.551470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.551614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.551635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:39.053 [2024-12-06 13:22:45.551649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.551661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.551713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.551732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:39.053 [2024-12-06 13:22:45.551752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.551763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.551810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.551827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:39.053 [2024-12-06 13:22:45.551871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.551898] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.551979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.053 [2024-12-06 13:22:45.552006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:39.053 [2024-12-06 13:22:45.552020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.053 [2024-12-06 13:22:45.552031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.053 [2024-12-06 13:22:45.552198] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.832 ms, result 0 00:28:40.427 00:28:40.427 00:28:40.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.427 13:22:46 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79190 00:28:40.427 13:22:46 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:28:40.427 13:22:46 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79190 00:28:40.427 13:22:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79190 ']' 00:28:40.427 13:22:46 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.427 13:22:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.427 13:22:46 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.427 13:22:46 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.427 13:22:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:40.427 [2024-12-06 13:22:46.766053] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
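The xtrace lines above show trim.sh@92-94 launching a fresh spdk_tgt with -L ftl_init and then blocking in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that step, assuming rpc.py's spdk_get_version call as the liveness probe (the real waitforlisten helper in autotest_common.sh may poll differently):

    # Start the SPDK target with FTL init logging and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # Retry until the UNIX domain socket accepts RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done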
00:28:40.427 [2024-12-06 13:22:46.766224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79190 ] 00:28:40.685 [2024-12-06 13:22:46.953759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.685 [2024-12-06 13:22:47.108985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.621 13:22:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.621 13:22:47 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:28:41.621 13:22:47 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:28:41.879 [2024-12-06 13:22:48.242519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:41.879 [2024-12-06 13:22:48.242609] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:42.139 [2024-12-06 13:22:48.423539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.139 [2024-12-06 13:22:48.423613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:42.139 [2024-12-06 13:22:48.423642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:42.139 [2024-12-06 13:22:48.423657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.139 [2024-12-06 13:22:48.427628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.139 [2024-12-06 13:22:48.427677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:42.139 [2024-12-06 13:22:48.427699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.941 ms 00:28:42.139 [2024-12-06 13:22:48.427712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.139 [2024-12-06 13:22:48.427872] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:42.139 [2024-12-06 13:22:48.428816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:42.139 [2024-12-06 13:22:48.428873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.139 [2024-12-06 13:22:48.428890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:42.139 [2024-12-06 13:22:48.428905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:28:42.139 [2024-12-06 13:22:48.428917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.139 [2024-12-06 13:22:48.430259] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:42.139 [2024-12-06 13:22:48.447005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.139 [2024-12-06 13:22:48.447070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:42.139 [2024-12-06 13:22:48.447093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.751 ms 00:28:42.139 [2024-12-06 13:22:48.447108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.139 [2024-12-06 13:22:48.447242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.139 [2024-12-06 13:22:48.447275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:42.139 [2024-12-06 13:22:48.447292] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:42.139 [2024-12-06 13:22:48.447310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.139 [2024-12-06 13:22:48.451893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.139 [2024-12-06 13:22:48.451970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:42.139 [2024-12-06 13:22:48.451991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.503 ms 00:28:42.139 [2024-12-06 13:22:48.452010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.139 [2024-12-06 13:22:48.452210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.139 [2024-12-06 13:22:48.452240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:42.139 [2024-12-06 13:22:48.452256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:28:42.139 [2024-12-06 13:22:48.452285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.139 [2024-12-06 13:22:48.452327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.139 [2024-12-06 13:22:48.452348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:42.139 [2024-12-06 13:22:48.452361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:42.139 [2024-12-06 13:22:48.452375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.139 [2024-12-06 13:22:48.452410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:42.139 [2024-12-06 13:22:48.456756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.140 [2024-12-06 13:22:48.456799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:42.140 [2024-12-06 13:22:48.456823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.349 ms 00:28:42.140 [2024-12-06 13:22:48.456837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.140 [2024-12-06 13:22:48.456943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.140 [2024-12-06 13:22:48.456963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:42.140 [2024-12-06 13:22:48.456982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:42.140 [2024-12-06 13:22:48.457001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.140 [2024-12-06 13:22:48.457039] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:42.140 [2024-12-06 13:22:48.457074] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:42.140 [2024-12-06 13:22:48.457139] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:42.140 [2024-12-06 13:22:48.457165] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:42.140 [2024-12-06 13:22:48.457292] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:42.140 [2024-12-06 13:22:48.457310] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:42.140 [2024-12-06 13:22:48.457339] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:42.140 [2024-12-06 13:22:48.457356] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:42.140 [2024-12-06 13:22:48.457377] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:42.140 [2024-12-06 13:22:48.457391] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:42.140 [2024-12-06 13:22:48.457408] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:42.140 [2024-12-06 13:22:48.457421] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:42.140 [2024-12-06 13:22:48.457442] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:42.140 [2024-12-06 13:22:48.457456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.140 [2024-12-06 13:22:48.457473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:42.140 [2024-12-06 13:22:48.457487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:28:42.140 [2024-12-06 13:22:48.457505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.140 [2024-12-06 13:22:48.457639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.140 [2024-12-06 13:22:48.457665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:42.140 [2024-12-06 13:22:48.457681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:42.140 [2024-12-06 13:22:48.457697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.140 [2024-12-06 13:22:48.457817] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:42.140 [2024-12-06 13:22:48.457854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:42.140 [2024-12-06 13:22:48.457872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:42.140 [2024-12-06 13:22:48.457891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.140 [2024-12-06 13:22:48.457905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:42.140 [2024-12-06 13:22:48.457924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:42.140 [2024-12-06 13:22:48.457937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:42.140 [2024-12-06 13:22:48.457958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:42.140 [2024-12-06 13:22:48.457971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:42.140 [2024-12-06 13:22:48.457989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:42.140 [2024-12-06 13:22:48.458002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:42.140 [2024-12-06 13:22:48.458018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:42.140 [2024-12-06 13:22:48.458030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:42.140 [2024-12-06 13:22:48.458047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:42.140 [2024-12-06 13:22:48.458060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:42.140 [2024-12-06 13:22:48.458076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.140 
[2024-12-06 13:22:48.458089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:42.140 [2024-12-06 13:22:48.458106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:42.140 [2024-12-06 13:22:48.458132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:42.140 [2024-12-06 13:22:48.458164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:42.140 [2024-12-06 13:22:48.458193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:42.140 [2024-12-06 13:22:48.458214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:42.140 [2024-12-06 13:22:48.458243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:42.140 [2024-12-06 13:22:48.458255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:42.140 [2024-12-06 13:22:48.458284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:42.140 [2024-12-06 13:22:48.458302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:42.140 [2024-12-06 13:22:48.458331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:42.140 [2024-12-06 13:22:48.458344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:42.140 [2024-12-06 13:22:48.458373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:42.140 [2024-12-06 13:22:48.458390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:42.140 [2024-12-06 13:22:48.458402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:42.140 [2024-12-06 13:22:48.458419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:42.140 [2024-12-06 13:22:48.458431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:42.140 [2024-12-06 13:22:48.458452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:42.140 [2024-12-06 13:22:48.458481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:42.140 [2024-12-06 13:22:48.458493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458507] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:42.140 [2024-12-06 13:22:48.458527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:42.140 [2024-12-06 13:22:48.458541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:42.140 [2024-12-06 13:22:48.458553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.140 [2024-12-06 13:22:48.458567] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:28:42.140 [2024-12-06 13:22:48.458578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:42.140 [2024-12-06 13:22:48.458591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:42.140 [2024-12-06 13:22:48.458603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:42.140 [2024-12-06 13:22:48.458618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:42.140 [2024-12-06 13:22:48.458630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:42.140 [2024-12-06 13:22:48.458644] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:42.140 [2024-12-06 13:22:48.458658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:42.140 [2024-12-06 13:22:48.458682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:42.140 [2024-12-06 13:22:48.458696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:42.140 [2024-12-06 13:22:48.458714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:42.140 [2024-12-06 13:22:48.458727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:42.140 [2024-12-06 13:22:48.458745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:42.140 [2024-12-06 13:22:48.458758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:42.140 [2024-12-06 13:22:48.458774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:42.140 [2024-12-06 13:22:48.458787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:42.140 [2024-12-06 13:22:48.458805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:42.140 [2024-12-06 13:22:48.458818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:42.140 [2024-12-06 13:22:48.458835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:42.140 [2024-12-06 13:22:48.458861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:42.140 [2024-12-06 13:22:48.458880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:42.140 [2024-12-06 13:22:48.458895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:42.141 [2024-12-06 13:22:48.458912] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:42.141 [2024-12-06 
13:22:48.458927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:42.141 [2024-12-06 13:22:48.458945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:42.141 [2024-12-06 13:22:48.458957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:42.141 [2024-12-06 13:22:48.458982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:42.141 [2024-12-06 13:22:48.458994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:42.141 [2024-12-06 13:22:48.459009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.459022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:42.141 [2024-12-06 13:22:48.459036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:28:42.141 [2024-12-06 13:22:48.459050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.493590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.493863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:42.141 [2024-12-06 13:22:48.493911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.454 ms 00:28:42.141 [2024-12-06 13:22:48.493933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.494138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.494161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:42.141 [2024-12-06 13:22:48.494182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:42.141 [2024-12-06 13:22:48.494196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.538618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.538704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:42.141 [2024-12-06 13:22:48.538733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.377 ms 00:28:42.141 [2024-12-06 13:22:48.538748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.538939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.538962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:42.141 [2024-12-06 13:22:48.538983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:42.141 [2024-12-06 13:22:48.538997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.539346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.539389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:42.141 [2024-12-06 13:22:48.539417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:28:42.141 [2024-12-06 13:22:48.539432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.539634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.539656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:42.141 [2024-12-06 13:22:48.539675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:28:42.141 [2024-12-06 13:22:48.539688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.560178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.560254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:42.141 [2024-12-06 13:22:48.560280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.450 ms 00:28:42.141 [2024-12-06 13:22:48.560294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.595674] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:42.141 [2024-12-06 13:22:48.595766] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:42.141 [2024-12-06 13:22:48.595799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.595816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:42.141 [2024-12-06 13:22:48.595866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.318 ms 00:28:42.141 [2024-12-06 13:22:48.595902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.632793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.632889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:42.141 [2024-12-06 13:22:48.632933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.740 ms 00:28:42.141 [2024-12-06 13:22:48.632951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.141 [2024-12-06 13:22:48.652764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.141 [2024-12-06 13:22:48.653019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:42.141 [2024-12-06 13:22:48.653068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.623 ms 00:28:42.141 [2024-12-06 13:22:48.653086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.399 [2024-12-06 13:22:48.673319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.399 [2024-12-06 13:22:48.673390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:42.399 [2024-12-06 13:22:48.673424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.038 ms 00:28:42.399 [2024-12-06 13:22:48.673441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.399 [2024-12-06 13:22:48.674567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.674623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:42.400 [2024-12-06 13:22:48.674655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:28:42.400 [2024-12-06 13:22:48.674680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 
13:22:48.768976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.769064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:42.400 [2024-12-06 13:22:48.769094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.234 ms 00:28:42.400 [2024-12-06 13:22:48.769111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 13:22:48.785189] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:42.400 [2024-12-06 13:22:48.802068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.802190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:42.400 [2024-12-06 13:22:48.802222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.748 ms 00:28:42.400 [2024-12-06 13:22:48.802253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 13:22:48.802417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.802447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:42.400 [2024-12-06 13:22:48.802465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:42.400 [2024-12-06 13:22:48.802482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 13:22:48.802559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.802582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:42.400 [2024-12-06 13:22:48.802598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:42.400 [2024-12-06 13:22:48.802619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 13:22:48.802657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.802679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:42.400 [2024-12-06 13:22:48.802695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:42.400 [2024-12-06 13:22:48.802711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 13:22:48.802771] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:42.400 [2024-12-06 13:22:48.802797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.802816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:42.400 [2024-12-06 13:22:48.802833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:42.400 [2024-12-06 13:22:48.802881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 13:22:48.842304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.842394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:42.400 [2024-12-06 13:22:48.842429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.323 ms 00:28:42.400 [2024-12-06 13:22:48.842447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 13:22:48.842682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.400 [2024-12-06 13:22:48.842720] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:42.400 [2024-12-06 13:22:48.842752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:42.400 [2024-12-06 13:22:48.842776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.400 [2024-12-06 13:22:48.844305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:42.400 [2024-12-06 13:22:48.850605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.220 ms, result 0 00:28:42.400 [2024-12-06 13:22:48.851887] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:42.400 Some configs were skipped because the RPC state that can call them passed over. 00:28:42.400 13:22:48 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:28:42.967 [2024-12-06 13:22:49.197942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.967 [2024-12-06 13:22:49.198220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:42.967 [2024-12-06 13:22:49.198372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:28:42.967 [2024-12-06 13:22:49.198517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.967 [2024-12-06 13:22:49.198632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.036 ms, result 0 00:28:42.967 true 00:28:42.967 13:22:49 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:28:43.225 [2024-12-06 13:22:49.598124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.225 [2024-12-06 13:22:49.598384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:43.225 [2024-12-06 13:22:49.598554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:28:43.225 [2024-12-06 13:22:49.598693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.225 [2024-12-06 13:22:49.598803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.650 ms, result 0 00:28:43.225 true 00:28:43.225 13:22:49 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79190 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79190 ']' 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79190 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79190 00:28:43.225 killing process with pid 79190 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79190' 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79190 00:28:43.225 13:22:49 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79190 00:28:44.602 [2024-12-06 13:22:50.691322] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.603 [2024-12-06 13:22:50.691392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:44.603 [2024-12-06 13:22:50.691414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:44.603 [2024-12-06 13:22:50.691430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.603 [2024-12-06 13:22:50.691486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:44.603 [2024-12-06 13:22:50.694909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.603 [2024-12-06 13:22:50.694951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:44.603 [2024-12-06 13:22:50.694973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:28:44.603 [2024-12-06 13:22:50.694985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.603 [2024-12-06 13:22:50.695285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.603 [2024-12-06 13:22:50.695320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:44.603 [2024-12-06 13:22:50.695337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:28:44.603 [2024-12-06 13:22:50.695349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.603 [2024-12-06 13:22:50.699470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.603 [2024-12-06 13:22:50.699529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:44.603 [2024-12-06 13:22:50.699556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.088 ms 00:28:44.603 [2024-12-06 13:22:50.699569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.603 [2024-12-06 13:22:50.707341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.603 [2024-12-06 13:22:50.707389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:44.603 [2024-12-06 13:22:50.707413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.714 ms 00:28:44.603 [2024-12-06 13:22:50.707426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.603 [2024-12-06 13:22:50.719959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.603 [2024-12-06 13:22:50.720013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:44.603 [2024-12-06 13:22:50.720039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.446 ms 00:28:44.603 [2024-12-06 13:22:50.720052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.604 [2024-12-06 13:22:50.728345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.604 [2024-12-06 13:22:50.728397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:44.604 [2024-12-06 13:22:50.728419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.237 ms 00:28:44.604 [2024-12-06 13:22:50.728432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.604 [2024-12-06 13:22:50.728599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.604 [2024-12-06 13:22:50.728620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:44.604 [2024-12-06 13:22:50.728636] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:44.604 [2024-12-06 13:22:50.728649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.604 [2024-12-06 13:22:50.741452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.604 [2024-12-06 13:22:50.741537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:44.604 [2024-12-06 13:22:50.741560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.759 ms 00:28:44.604 [2024-12-06 13:22:50.741573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.604 [2024-12-06 13:22:50.754467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.604 [2024-12-06 13:22:50.754571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:44.604 [2024-12-06 13:22:50.754635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.813 ms 00:28:44.604 [2024-12-06 13:22:50.754655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.604 [2024-12-06 13:22:50.767200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.604 [2024-12-06 13:22:50.767265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:44.605 [2024-12-06 13:22:50.767293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.403 ms 00:28:44.605 [2024-12-06 13:22:50.767307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.605 [2024-12-06 13:22:50.779760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.605 [2024-12-06 13:22:50.779848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:44.605 [2024-12-06 13:22:50.779879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.324 ms 00:28:44.605 [2024-12-06 13:22:50.779894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.605 [2024-12-06 13:22:50.779965] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[FTL][ftl0] Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free (identical entry repeated for all 100 bands)
00:28:44.610 [2024-12-06 13:22:50.781736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:44.610 [2024-12-06 13:22:50.781771] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9c5936a-1bb5-432f-b1c3-6cf254b3be43 00:28:44.610 [2024-12-06 13:22:50.781791] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:44.610 [2024-12-06 13:22:50.781809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:44.610 [2024-12-06 13:22:50.781822] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:44.610 [2024-12-06 13:22:50.781851] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:44.610 [2024-12-06 13:22:50.781867] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:44.610 [2024-12-06 13:22:50.781885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:44.610 [2024-12-06 13:22:50.781898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:44.611 [2024-12-06 13:22:50.781914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:44.611 [2024-12-06 13:22:50.781926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:44.611 [2024-12-06 13:22:50.781957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:44.611 [2024-12-06 13:22:50.781971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:44.611 [2024-12-06 13:22:50.781991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:28:44.611 [2024-12-06 13:22:50.782005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.611 [2024-12-06 13:22:50.799358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.611 [2024-12-06 13:22:50.799439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:44.611 [2024-12-06 13:22:50.799475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.255 ms 00:28:44.611 [2024-12-06 13:22:50.799491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.611 [2024-12-06 13:22:50.800110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.611 [2024-12-06 13:22:50.800141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:44.611 [2024-12-06 13:22:50.800172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:28:44.611 [2024-12-06 13:22:50.800186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.611 [2024-12-06 13:22:50.861521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.611 [2024-12-06 13:22:50.861604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:44.611 [2024-12-06 13:22:50.861634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.611 [2024-12-06 13:22:50.861649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.611 [2024-12-06 13:22:50.861820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.611 [2024-12-06 13:22:50.861854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:44.611 [2024-12-06 13:22:50.861886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.611 [2024-12-06 13:22:50.861899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.611 [2024-12-06 13:22:50.861993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.611 [2024-12-06 13:22:50.862014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:44.611 [2024-12-06 13:22:50.862038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.611 [2024-12-06 13:22:50.862051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.611 [2024-12-06 13:22:50.862086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.611 [2024-12-06 13:22:50.862101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:44.611 [2024-12-06 13:22:50.862119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.612 [2024-12-06 13:22:50.862137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.612 [2024-12-06 13:22:50.969025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.612 [2024-12-06 13:22:50.969103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:44.612 [2024-12-06 13:22:50.969127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.612 [2024-12-06 13:22:50.969140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.612 [2024-12-06 
13:22:51.063034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.612 [2024-12-06 13:22:51.063137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:44.612 [2024-12-06 13:22:51.063186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.612 [2024-12-06 13:22:51.063222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.612 [2024-12-06 13:22:51.063372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.612 [2024-12-06 13:22:51.063397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:44.612 [2024-12-06 13:22:51.063424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.612 [2024-12-06 13:22:51.063439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.612 [2024-12-06 13:22:51.063487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.612 [2024-12-06 13:22:51.063515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:44.612 [2024-12-06 13:22:51.063551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.612 [2024-12-06 13:22:51.063571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.612 [2024-12-06 13:22:51.063761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.612 [2024-12-06 13:22:51.063783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:44.612 [2024-12-06 13:22:51.063805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.612 [2024-12-06 13:22:51.063819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.612 [2024-12-06 13:22:51.063932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.612 [2024-12-06 13:22:51.063964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:44.612 [2024-12-06 13:22:51.063988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.613 [2024-12-06 13:22:51.064004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.613 [2024-12-06 13:22:51.064091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.613 [2024-12-06 13:22:51.064112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:44.613 [2024-12-06 13:22:51.064135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.613 [2024-12-06 13:22:51.064149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.613 [2024-12-06 13:22:51.064254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.613 [2024-12-06 13:22:51.064285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:44.613 [2024-12-06 13:22:51.064307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.613 [2024-12-06 13:22:51.064321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.613 [2024-12-06 13:22:51.064511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.153 ms, result 0 00:28:45.561 13:22:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:45.820 [2024-12-06 13:22:52.152329] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:28:45.820 [2024-12-06 13:22:52.152494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79255 ] 00:28:45.820 [2024-12-06 13:22:52.327196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.130 [2024-12-06 13:22:52.432026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.387 [2024-12-06 13:22:52.782128] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:46.387 [2024-12-06 13:22:52.782219] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:46.646 [2024-12-06 13:22:52.948261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.646 [2024-12-06 13:22:52.948489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:46.646 [2024-12-06 13:22:52.948587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:46.646 [2024-12-06 13:22:52.948666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.952169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.952350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:46.647 [2024-12-06 13:22:52.952453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.407 ms 00:28:46.647 [2024-12-06 13:22:52.952576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.952942] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:46.647 [2024-12-06 13:22:52.954349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:46.647 [2024-12-06 13:22:52.954492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.954593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:46.647 [2024-12-06 13:22:52.954693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms 00:28:46.647 [2024-12-06 13:22:52.954796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.956225] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:46.647 [2024-12-06 13:22:52.973516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.973683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:46.647 [2024-12-06 13:22:52.973803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.292 ms 00:28:46.647 [2024-12-06 13:22:52.973966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.974181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.974284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:46.647 [2024-12-06 13:22:52.974368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:46.647 [2024-12-06 
13:22:52.974453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.979337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.979542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:46.647 [2024-12-06 13:22:52.979622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.757 ms 00:28:46.647 [2024-12-06 13:22:52.979706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.979972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.980060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:46.647 [2024-12-06 13:22:52.980134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:28:46.647 [2024-12-06 13:22:52.980199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.980300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.980369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:46.647 [2024-12-06 13:22:52.980448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:46.647 [2024-12-06 13:22:52.980523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.980626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:46.647 [2024-12-06 13:22:52.985219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.985361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:46.647 [2024-12-06 13:22:52.985471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.604 ms 00:28:46.647 [2024-12-06 13:22:52.985555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.985716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.985796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:46.647 [2024-12-06 13:22:52.985818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:46.647 [2024-12-06 13:22:52.985830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.985910] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:46.647 [2024-12-06 13:22:52.985943] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:46.647 [2024-12-06 13:22:52.985986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:46.647 [2024-12-06 13:22:52.986006] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:46.647 [2024-12-06 13:22:52.986119] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:46.647 [2024-12-06 13:22:52.986135] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:46.647 [2024-12-06 13:22:52.986150] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:28:46.647 [2024-12-06 13:22:52.986170] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986184] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:46.647 [2024-12-06 13:22:52.986207] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:46.647 [2024-12-06 13:22:52.986218] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:46.647 [2024-12-06 13:22:52.986228] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:46.647 [2024-12-06 13:22:52.986241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.986252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:46.647 [2024-12-06 13:22:52.986263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:28:46.647 [2024-12-06 13:22:52.986275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.986377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.647 [2024-12-06 13:22:52.986398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:46.647 [2024-12-06 13:22:52.986411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:46.647 [2024-12-06 13:22:52.986421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.647 [2024-12-06 13:22:52.986545] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:46.647 [2024-12-06 13:22:52.986567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:46.647 [2024-12-06 13:22:52.986580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:46.647 [2024-12-06 13:22:52.986613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:46.647 [2024-12-06 13:22:52.986645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:46.647 [2024-12-06 13:22:52.986665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:46.647 [2024-12-06 13:22:52.986690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:46.647 [2024-12-06 13:22:52.986701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:46.647 [2024-12-06 13:22:52.986712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:46.647 [2024-12-06 13:22:52.986723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:46.647 [2024-12-06 13:22:52.986733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:28:46.647 [2024-12-06 13:22:52.986753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:46.647 [2024-12-06 13:22:52.986784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:46.647 [2024-12-06 13:22:52.986816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:46.647 [2024-12-06 13:22:52.986868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:46.647 [2024-12-06 13:22:52.986899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.647 [2024-12-06 13:22:52.986919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:46.647 [2024-12-06 13:22:52.986941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:46.647 [2024-12-06 13:22:52.986951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:46.647 [2024-12-06 13:22:52.986961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:46.647 [2024-12-06 13:22:52.986971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:46.647 [2024-12-06 13:22:52.986981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:46.647 [2024-12-06 13:22:52.986991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:46.647 [2024-12-06 13:22:52.987001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:46.647 [2024-12-06 13:22:52.987011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.647 [2024-12-06 13:22:52.987021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:46.647 [2024-12-06 13:22:52.987031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:46.647 [2024-12-06 13:22:52.987041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.648 [2024-12-06 13:22:52.987051] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:46.648 [2024-12-06 13:22:52.987063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:46.648 [2024-12-06 13:22:52.987079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:46.648 [2024-12-06 13:22:52.987090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.648 [2024-12-06 13:22:52.987102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:46.648 [2024-12-06 13:22:52.987112] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:46.648 [2024-12-06 13:22:52.987122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:46.648 [2024-12-06 13:22:52.987133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:46.648 [2024-12-06 13:22:52.987142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:46.648 [2024-12-06 13:22:52.987153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:46.648 [2024-12-06 13:22:52.987165] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:46.648 [2024-12-06 13:22:52.987179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:46.648 [2024-12-06 13:22:52.987192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:46.648 [2024-12-06 13:22:52.987204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:46.648 [2024-12-06 13:22:52.987216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:46.648 [2024-12-06 13:22:52.987227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:46.648 [2024-12-06 13:22:52.987238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:46.648 [2024-12-06 13:22:52.987252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:46.648 [2024-12-06 13:22:52.987271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:46.648 [2024-12-06 13:22:52.987284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:46.648 [2024-12-06 13:22:52.987295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:46.648 [2024-12-06 13:22:52.987306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:46.648 [2024-12-06 13:22:52.987317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:46.648 [2024-12-06 13:22:52.987328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:46.648 [2024-12-06 13:22:52.987339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:46.648 [2024-12-06 13:22:52.987350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:46.648 [2024-12-06 13:22:52.987361] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:46.648 [2024-12-06 13:22:52.987374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:46.648 [2024-12-06 13:22:52.987386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:46.648 [2024-12-06 13:22:52.987397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:46.648 [2024-12-06 13:22:52.987408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:46.648 [2024-12-06 13:22:52.987419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:46.648 [2024-12-06 13:22:52.987432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:52.987450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:46.648 [2024-12-06 13:22:52.987462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:28:46.648 [2024-12-06 13:22:52.987473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.021809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.021895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:46.648 [2024-12-06 13:22:53.021918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.234 ms 00:28:46.648 [2024-12-06 13:22:53.021931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.022143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.022166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:46.648 [2024-12-06 13:22:53.022179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:46.648 [2024-12-06 13:22:53.022192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.082337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.082429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:46.648 [2024-12-06 13:22:53.082469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.109 ms 00:28:46.648 [2024-12-06 13:22:53.082482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.082656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.082679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:46.648 [2024-12-06 13:22:53.082693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:46.648 [2024-12-06 13:22:53.082704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.083078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.083099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:46.648 [2024-12-06 13:22:53.083122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:28:46.648 [2024-12-06 13:22:53.083133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.083324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.083351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:46.648 [2024-12-06 13:22:53.083365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:28:46.648 [2024-12-06 13:22:53.083379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.101638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.101708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:46.648 [2024-12-06 13:22:53.101729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.205 ms 00:28:46.648 [2024-12-06 13:22:53.101742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.120084] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:46.648 [2024-12-06 13:22:53.120162] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:46.648 [2024-12-06 13:22:53.120197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.120214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:46.648 [2024-12-06 13:22:53.120229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.220 ms 00:28:46.648 [2024-12-06 13:22:53.120240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.151436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.151556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:46.648 [2024-12-06 13:22:53.151582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.992 ms 00:28:46.648 [2024-12-06 13:22:53.151596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.648 [2024-12-06 13:22:53.169352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.648 [2024-12-06 13:22:53.169432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:46.648 [2024-12-06 13:22:53.169453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.570 ms 00:28:46.648 [2024-12-06 13:22:53.169465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.186246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.186318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:46.907 [2024-12-06 13:22:53.186338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.612 ms 00:28:46.907 [2024-12-06 13:22:53.186350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.187387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.187432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:46.907 [2024-12-06 13:22:53.187450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:28:46.907 [2024-12-06 13:22:53.187462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.265621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 
13:22:53.265715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:46.907 [2024-12-06 13:22:53.265738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.115 ms 00:28:46.907 [2024-12-06 13:22:53.265750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.279338] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:46.907 [2024-12-06 13:22:53.294652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.294745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:46.907 [2024-12-06 13:22:53.294767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.684 ms 00:28:46.907 [2024-12-06 13:22:53.294791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.295032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.295084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:46.907 [2024-12-06 13:22:53.295112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:46.907 [2024-12-06 13:22:53.295133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.295214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.295243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:46.907 [2024-12-06 13:22:53.295266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:46.907 [2024-12-06 13:22:53.295296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.295375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.295416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:46.907 [2024-12-06 13:22:53.295440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:46.907 [2024-12-06 13:22:53.295461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.295561] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:46.907 [2024-12-06 13:22:53.295593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.295616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:46.907 [2024-12-06 13:22:53.295640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:46.907 [2024-12-06 13:22:53.295661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.331247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.331347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:46.907 [2024-12-06 13:22:53.331369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.527 ms 00:28:46.907 [2024-12-06 13:22:53.331382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.331633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.907 [2024-12-06 13:22:53.331658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:46.907 [2024-12-06 
13:22:53.331673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:46.907 [2024-12-06 13:22:53.331685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.907 [2024-12-06 13:22:53.332758] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:46.907 [2024-12-06 13:22:53.337593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.147 ms, result 0 00:28:46.907 [2024-12-06 13:22:53.338800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:46.907 [2024-12-06 13:22:53.358425] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.280  [2024-12-06T13:22:55.742Z] Copying: 25/256 [MB] (25 MBps) [2024-12-06T13:22:56.677Z] Copying: 49/256 [MB] (23 MBps) [2024-12-06T13:22:57.611Z] Copying: 72/256 [MB] (23 MBps) [2024-12-06T13:22:58.557Z] Copying: 96/256 [MB] (23 MBps) [2024-12-06T13:22:59.489Z] Copying: 120/256 [MB] (24 MBps) [2024-12-06T13:23:00.421Z] Copying: 145/256 [MB] (24 MBps) [2024-12-06T13:23:01.794Z] Copying: 166/256 [MB] (21 MBps) [2024-12-06T13:23:02.727Z] Copying: 188/256 [MB] (22 MBps) [2024-12-06T13:23:03.757Z] Copying: 213/256 [MB] (24 MBps) [2024-12-06T13:23:04.339Z] Copying: 238/256 [MB] (25 MBps) [2024-12-06T13:23:04.598Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-06 13:23:04.460120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:58.070 [2024-12-06 13:23:04.474127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.474302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:58.070 [2024-12-06 13:23:04.474338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:58.070 [2024-12-06 13:23:04.474351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.070 [2024-12-06 13:23:04.474387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:58.070 [2024-12-06 13:23:04.477749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.477790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:58.070 [2024-12-06 13:23:04.477807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:28:58.070 [2024-12-06 13:23:04.477818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.070 [2024-12-06 13:23:04.478176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.478216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:58.070 [2024-12-06 13:23:04.478233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:28:58.070 [2024-12-06 13:23:04.478244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.070 [2024-12-06 13:23:04.482039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.482097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:58.070 [2024-12-06 13:23:04.482113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:28:58.070 [2024-12-06 13:23:04.482124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:58.070 [2024-12-06 13:23:04.490205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.490260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:58.070 [2024-12-06 13:23:04.490277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.049 ms 00:28:58.070 [2024-12-06 13:23:04.490288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.070 [2024-12-06 13:23:04.523630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.523705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:58.070 [2024-12-06 13:23:04.523727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.229 ms 00:28:58.070 [2024-12-06 13:23:04.523739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.070 [2024-12-06 13:23:04.542866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.542941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:58.070 [2024-12-06 13:23:04.542977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.031 ms 00:28:58.070 [2024-12-06 13:23:04.542989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.070 [2024-12-06 13:23:04.543213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.543235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:58.070 [2024-12-06 13:23:04.543263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:58.070 [2024-12-06 13:23:04.543274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.070 [2024-12-06 13:23:04.576632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.070 [2024-12-06 13:23:04.576709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:58.070 [2024-12-06 13:23:04.576728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.330 ms 00:28:58.070 [2024-12-06 13:23:04.576740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.330 [2024-12-06 13:23:04.609140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.330 [2024-12-06 13:23:04.609232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:58.330 [2024-12-06 13:23:04.609259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.308 ms 00:28:58.330 [2024-12-06 13:23:04.609271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.330 [2024-12-06 13:23:04.642613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.330 [2024-12-06 13:23:04.642689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:58.330 [2024-12-06 13:23:04.642710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.251 ms 00:28:58.330 [2024-12-06 13:23:04.642721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.330 [2024-12-06 13:23:04.677759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.330 [2024-12-06 13:23:04.677874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:58.330 [2024-12-06 13:23:04.677898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.878 ms 00:28:58.330 
[2024-12-06 13:23:04.677910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.330 [2024-12-06 13:23:04.678010] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:58.330 [2024-12-06 13:23:04.678050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678382] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 
13:23:04.678668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:58.330 [2024-12-06 13:23:04.678725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:28:58.331 [2024-12-06 13:23:04.678974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.678997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:58.331 [2024-12-06 13:23:04.679316] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:58.331 [2024-12-06 13:23:04.679330] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9c5936a-1bb5-432f-b1c3-6cf254b3be43 00:28:58.331 [2024-12-06 13:23:04.679341] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:58.331 [2024-12-06 13:23:04.679352] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:58.331 [2024-12-06 13:23:04.679363] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:58.331 [2024-12-06 13:23:04.679374] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:58.331 [2024-12-06 13:23:04.679384] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:58.331 [2024-12-06 13:23:04.679395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:58.331 [2024-12-06 13:23:04.679412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:58.331 [2024-12-06 13:23:04.679422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:58.331 [2024-12-06 13:23:04.679432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:58.331 [2024-12-06 13:23:04.679444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.331 [2024-12-06 13:23:04.679455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:58.331 [2024-12-06 13:23:04.679467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:28:58.331 [2024-12-06 13:23:04.679478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.331 [2024-12-06 13:23:04.696337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.331 [2024-12-06 13:23:04.696397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:58.331 [2024-12-06 13:23:04.696417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.825 ms 00:28:58.331 [2024-12-06 13:23:04.696429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.331 [2024-12-06 13:23:04.696946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.331 [2024-12-06 13:23:04.696972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:58.331 [2024-12-06 13:23:04.696987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:28:58.331 [2024-12-06 13:23:04.696998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.331 [2024-12-06 13:23:04.745899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.331 [2024-12-06 13:23:04.746001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:58.331 [2024-12-06 13:23:04.746037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.331 [2024-12-06 13:23:04.746074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.331 [2024-12-06 13:23:04.746213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.331 [2024-12-06 13:23:04.746235] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:58.331 [2024-12-06 13:23:04.746258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.331 [2024-12-06 13:23:04.746276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.331 [2024-12-06 13:23:04.746364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.331 [2024-12-06 13:23:04.746384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:58.331 [2024-12-06 13:23:04.746397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.331 [2024-12-06 13:23:04.746409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.331 [2024-12-06 13:23:04.746441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.331 [2024-12-06 13:23:04.746456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:58.331 [2024-12-06 13:23:04.746467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.331 [2024-12-06 13:23:04.746479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.331 [2024-12-06 13:23:04.853318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.331 [2024-12-06 13:23:04.853439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:58.331 [2024-12-06 13:23:04.853470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.331 [2024-12-06 13:23:04.853493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.591 [2024-12-06 13:23:04.943937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.591 [2024-12-06 13:23:04.944010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:58.591 [2024-12-06 13:23:04.944030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.591 [2024-12-06 13:23:04.944042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.591 [2024-12-06 13:23:04.944132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.591 [2024-12-06 13:23:04.944150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.591 [2024-12-06 13:23:04.944162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.591 [2024-12-06 13:23:04.944173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.591 [2024-12-06 13:23:04.944208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.591 [2024-12-06 13:23:04.944234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.591 [2024-12-06 13:23:04.944247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.591 [2024-12-06 13:23:04.944258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.591 [2024-12-06 13:23:04.944383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.591 [2024-12-06 13:23:04.944402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.591 [2024-12-06 13:23:04.944415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.591 [2024-12-06 13:23:04.944426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.591 [2024-12-06 13:23:04.944479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:58.591 [2024-12-06 13:23:04.944498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:58.591 [2024-12-06 13:23:04.944517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.591 [2024-12-06 13:23:04.944528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.591 [2024-12-06 13:23:04.944575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.591 [2024-12-06 13:23:04.944590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.591 [2024-12-06 13:23:04.944602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.591 [2024-12-06 13:23:04.944613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.591 [2024-12-06 13:23:04.944665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.591 [2024-12-06 13:23:04.944689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.591 [2024-12-06 13:23:04.944701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.591 [2024-12-06 13:23:04.944711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.591 [2024-12-06 13:23:04.944910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 470.780 ms, result 0 00:28:59.524 00:28:59.524 00:28:59.524 13:23:05 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:00.089 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:29:00.089 13:23:06 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:29:00.089 13:23:06 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:29:00.089 13:23:06 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:00.089 13:23:06 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:00.089 13:23:06 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:29:00.347 13:23:06 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:29:00.347 13:23:06 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79190 00:29:00.347 13:23:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79190 ']' 00:29:00.347 13:23:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79190 00:29:00.347 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79190) - No such process 00:29:00.347 Process with pid 79190 is not found 00:29:00.347 13:23:06 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79190 is not found' 00:29:00.347 00:29:00.347 real 1m12.953s 00:29:00.347 user 1m44.558s 00:29:00.347 sys 0m7.708s 00:29:00.347 13:23:06 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.347 ************************************ 00:29:00.347 13:23:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:29:00.347 END TEST ftl_trim 00:29:00.347 ************************************ 00:29:00.347 13:23:06 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:29:00.347 13:23:06 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:00.347 13:23:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.347 13:23:06 ftl -- common/autotest_common.sh@10 
-- # set +x 00:29:00.347 ************************************ 00:29:00.347 START TEST ftl_restore 00:29:00.347 ************************************ 00:29:00.347 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:29:00.347 * Looking for test storage... 00:29:00.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:00.347 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:00.347 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:29:00.347 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:00.607 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.607 13:23:06 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:29:00.607 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.607 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:00.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.607 --rc genhtml_branch_coverage=1 00:29:00.607 --rc genhtml_function_coverage=1 00:29:00.607 --rc genhtml_legend=1 00:29:00.607 --rc geninfo_all_blocks=1 00:29:00.607 --rc geninfo_unexecuted_blocks=1 00:29:00.607 00:29:00.607 ' 00:29:00.607 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:00.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.607 --rc genhtml_branch_coverage=1 00:29:00.607 --rc genhtml_function_coverage=1 00:29:00.607 --rc genhtml_legend=1 00:29:00.607 --rc geninfo_all_blocks=1 00:29:00.607 --rc geninfo_unexecuted_blocks=1 00:29:00.607 00:29:00.607 ' 00:29:00.607 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:00.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.607 --rc genhtml_branch_coverage=1 00:29:00.607 --rc genhtml_function_coverage=1 00:29:00.607 --rc genhtml_legend=1 00:29:00.607 --rc geninfo_all_blocks=1 00:29:00.607 --rc geninfo_unexecuted_blocks=1 00:29:00.607 00:29:00.607 ' 00:29:00.607 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:00.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.607 --rc genhtml_branch_coverage=1 00:29:00.607 --rc genhtml_function_coverage=1 00:29:00.607 --rc genhtml_legend=1 00:29:00.608 --rc geninfo_all_blocks=1 00:29:00.608 --rc geninfo_unexecuted_blocks=1 00:29:00.608 00:29:00.608 ' 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
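Before common.sh is pulled in above, the harness probes the installed lcov with the `lt 1.15 2` / `cmp_versions` trace: each dotted version is split on the separators `.-:` into an array, and the components are compared numerically, left to right, until one side wins. A minimal stand-alone bash sketch of that comparison follows; `ver_lt` is an illustrative name rather than the real helper (the actual logic lives in scripts/common.sh), and this sketch assumes purely numeric components.

#!/usr/bin/env bash
# ver_lt A B -> exit 0 when dotted version A sorts strictly before version B.
ver_lt() {
    local IFS=.-:                             # same separators the trace shows
    local -a a=($1) b=($2)
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=$(( 10#${a[i]:-0} )) y=$(( 10#${b[i]:-0} ))  # missing parts count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                  # equal versions are not "less than"
}

# Mirrors the decision recorded in the log: 1.15 sorts before 2, so the
# legacy "--rc lcov_branch_coverage=1" style LCOV_OPTS are exported.
ver_lt 1.15 2 && echo 'lcov < 2: export legacy LCOV_OPTS'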
00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.BUPTdTM7hW 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:00.608 
13:23:06 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79459 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:00.608 13:23:06 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79459 00:29:00.608 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79459 ']' 00:29:00.608 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.608 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.608 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.608 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.608 13:23:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:29:00.608 [2024-12-06 13:23:07.073492] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:29:00.608 [2024-12-06 13:23:07.073955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79459 ] 00:29:00.866 [2024-12-06 13:23:07.267892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.866 [2024-12-06 13:23:07.374654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.815 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.815 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:29:01.815 13:23:08 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:01.815 13:23:08 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:29:01.815 13:23:08 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:01.815 13:23:08 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:29:01.815 13:23:08 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:29:01.815 13:23:08 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:02.381 13:23:08 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:02.381 13:23:08 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:29:02.381 13:23:08 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:02.381 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:02.381 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:02.381 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:29:02.381 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:29:02.381 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:02.639 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:02.639 { 00:29:02.639 "name": "nvme0n1", 00:29:02.639 "aliases": [ 00:29:02.639 "e0a7946f-429c-4df1-98d8-51a6d10d8f99" 00:29:02.639 ], 00:29:02.639 "product_name": "NVMe disk", 00:29:02.639 "block_size": 4096, 00:29:02.639 "num_blocks": 1310720, 00:29:02.639 "uuid": 
"e0a7946f-429c-4df1-98d8-51a6d10d8f99", 00:29:02.639 "numa_id": -1, 00:29:02.639 "assigned_rate_limits": { 00:29:02.639 "rw_ios_per_sec": 0, 00:29:02.639 "rw_mbytes_per_sec": 0, 00:29:02.639 "r_mbytes_per_sec": 0, 00:29:02.639 "w_mbytes_per_sec": 0 00:29:02.639 }, 00:29:02.639 "claimed": true, 00:29:02.639 "claim_type": "read_many_write_one", 00:29:02.639 "zoned": false, 00:29:02.639 "supported_io_types": { 00:29:02.639 "read": true, 00:29:02.639 "write": true, 00:29:02.639 "unmap": true, 00:29:02.639 "flush": true, 00:29:02.639 "reset": true, 00:29:02.639 "nvme_admin": true, 00:29:02.639 "nvme_io": true, 00:29:02.639 "nvme_io_md": false, 00:29:02.639 "write_zeroes": true, 00:29:02.639 "zcopy": false, 00:29:02.639 "get_zone_info": false, 00:29:02.639 "zone_management": false, 00:29:02.639 "zone_append": false, 00:29:02.639 "compare": true, 00:29:02.639 "compare_and_write": false, 00:29:02.639 "abort": true, 00:29:02.639 "seek_hole": false, 00:29:02.639 "seek_data": false, 00:29:02.639 "copy": true, 00:29:02.639 "nvme_iov_md": false 00:29:02.639 }, 00:29:02.639 "driver_specific": { 00:29:02.639 "nvme": [ 00:29:02.639 { 00:29:02.639 "pci_address": "0000:00:11.0", 00:29:02.639 "trid": { 00:29:02.639 "trtype": "PCIe", 00:29:02.639 "traddr": "0000:00:11.0" 00:29:02.639 }, 00:29:02.639 "ctrlr_data": { 00:29:02.639 "cntlid": 0, 00:29:02.639 "vendor_id": "0x1b36", 00:29:02.639 "model_number": "QEMU NVMe Ctrl", 00:29:02.639 "serial_number": "12341", 00:29:02.639 "firmware_revision": "8.0.0", 00:29:02.639 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:02.639 "oacs": { 00:29:02.639 "security": 0, 00:29:02.639 "format": 1, 00:29:02.639 "firmware": 0, 00:29:02.639 "ns_manage": 1 00:29:02.639 }, 00:29:02.639 "multi_ctrlr": false, 00:29:02.639 "ana_reporting": false 00:29:02.639 }, 00:29:02.639 "vs": { 00:29:02.639 "nvme_version": "1.4" 00:29:02.639 }, 00:29:02.639 "ns_data": { 00:29:02.639 "id": 1, 00:29:02.639 "can_share": false 00:29:02.639 } 00:29:02.639 } 00:29:02.639 ], 00:29:02.639 "mp_policy": "active_passive" 00:29:02.639 } 00:29:02.639 } 00:29:02.639 ]' 00:29:02.639 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:02.639 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:29:02.639 13:23:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:02.639 13:23:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:02.639 13:23:09 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:02.639 13:23:09 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:29:02.639 13:23:09 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:29:02.639 13:23:09 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:02.639 13:23:09 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:29:02.639 13:23:09 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:02.639 13:23:09 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:02.898 13:23:09 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d25c70b7-3691-4cdd-b10f-ea16a1423d35 00:29:02.898 13:23:09 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:29:02.898 13:23:09 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d25c70b7-3691-4cdd-b10f-ea16a1423d35 00:29:03.156 13:23:09 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:29:03.414 13:23:09 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=92c3b395-39ac-49da-b1ed-3e9284f5b174 00:29:03.414 13:23:09 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 92c3b395-39ac-49da-b1ed-3e9284f5b174 00:29:03.982 13:23:10 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:03.982 13:23:10 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:03.982 13:23:10 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:03.982 13:23:10 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:29:03.982 13:23:10 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:03.982 13:23:10 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:03.982 13:23:10 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:29:03.982 13:23:10 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:03.982 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:03.982 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:03.982 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:29:03.982 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:29:03.982 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:03.982 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:03.982 { 00:29:03.982 "name": "f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48", 00:29:03.982 "aliases": [ 00:29:03.982 "lvs/nvme0n1p0" 00:29:03.982 ], 00:29:03.982 "product_name": "Logical Volume", 00:29:03.982 "block_size": 4096, 00:29:03.982 "num_blocks": 26476544, 00:29:03.982 "uuid": "f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48", 00:29:03.982 "assigned_rate_limits": { 00:29:03.982 "rw_ios_per_sec": 0, 00:29:03.982 "rw_mbytes_per_sec": 0, 00:29:03.982 "r_mbytes_per_sec": 0, 00:29:03.982 "w_mbytes_per_sec": 0 00:29:03.982 }, 00:29:03.982 "claimed": false, 00:29:03.982 "zoned": false, 00:29:03.982 "supported_io_types": { 00:29:03.982 "read": true, 00:29:03.982 "write": true, 00:29:03.982 "unmap": true, 00:29:03.982 "flush": false, 00:29:03.982 "reset": true, 00:29:03.982 "nvme_admin": false, 00:29:03.982 "nvme_io": false, 00:29:03.982 "nvme_io_md": false, 00:29:03.982 "write_zeroes": true, 00:29:03.982 "zcopy": false, 00:29:03.982 "get_zone_info": false, 00:29:03.982 "zone_management": false, 00:29:03.982 "zone_append": false, 00:29:03.982 "compare": false, 00:29:03.982 "compare_and_write": false, 00:29:03.982 "abort": false, 00:29:03.982 "seek_hole": true, 00:29:03.982 "seek_data": true, 00:29:03.982 "copy": false, 00:29:03.982 "nvme_iov_md": false 00:29:03.982 }, 00:29:03.982 "driver_specific": { 00:29:03.982 "lvol": { 00:29:03.982 "lvol_store_uuid": "92c3b395-39ac-49da-b1ed-3e9284f5b174", 00:29:03.982 "base_bdev": "nvme0n1", 00:29:03.982 "thin_provision": true, 00:29:03.982 "num_allocated_clusters": 0, 00:29:03.982 "snapshot": false, 00:29:03.982 "clone": false, 00:29:03.982 "esnap_clone": false 00:29:03.982 } 00:29:03.982 } 00:29:03.982 } 00:29:03.982 ]' 00:29:03.982 13:23:10 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:04.241 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:29:04.241 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:04.241 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:04.241 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:04.241 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:29:04.241 13:23:10 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:29:04.241 13:23:10 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:29:04.241 13:23:10 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:04.499 13:23:10 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:04.499 13:23:10 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:04.499 13:23:10 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:04.499 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:04.499 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:04.499 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:29:04.499 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:29:04.499 13:23:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:04.756 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:04.756 { 00:29:04.756 "name": "f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48", 00:29:04.756 "aliases": [ 00:29:04.756 "lvs/nvme0n1p0" 00:29:04.756 ], 00:29:04.756 "product_name": "Logical Volume", 00:29:04.756 "block_size": 4096, 00:29:04.756 "num_blocks": 26476544, 00:29:04.756 "uuid": "f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48", 00:29:04.756 "assigned_rate_limits": { 00:29:04.756 "rw_ios_per_sec": 0, 00:29:04.756 "rw_mbytes_per_sec": 0, 00:29:04.756 "r_mbytes_per_sec": 0, 00:29:04.756 "w_mbytes_per_sec": 0 00:29:04.756 }, 00:29:04.756 "claimed": false, 00:29:04.756 "zoned": false, 00:29:04.756 "supported_io_types": { 00:29:04.757 "read": true, 00:29:04.757 "write": true, 00:29:04.757 "unmap": true, 00:29:04.757 "flush": false, 00:29:04.757 "reset": true, 00:29:04.757 "nvme_admin": false, 00:29:04.757 "nvme_io": false, 00:29:04.757 "nvme_io_md": false, 00:29:04.757 "write_zeroes": true, 00:29:04.757 "zcopy": false, 00:29:04.757 "get_zone_info": false, 00:29:04.757 "zone_management": false, 00:29:04.757 "zone_append": false, 00:29:04.757 "compare": false, 00:29:04.757 "compare_and_write": false, 00:29:04.757 "abort": false, 00:29:04.757 "seek_hole": true, 00:29:04.757 "seek_data": true, 00:29:04.757 "copy": false, 00:29:04.757 "nvme_iov_md": false 00:29:04.757 }, 00:29:04.757 "driver_specific": { 00:29:04.757 "lvol": { 00:29:04.757 "lvol_store_uuid": "92c3b395-39ac-49da-b1ed-3e9284f5b174", 00:29:04.757 "base_bdev": "nvme0n1", 00:29:04.757 "thin_provision": true, 00:29:04.757 "num_allocated_clusters": 0, 00:29:04.757 "snapshot": false, 00:29:04.757 "clone": false, 00:29:04.757 "esnap_clone": false 00:29:04.757 } 00:29:04.757 } 00:29:04.757 } 00:29:04.757 ]' 00:29:04.757 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
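The `jq '.[] .block_size'` probe that closes the dump above, together with the `bs=` / `nb=` captures that follow it, is how `get_bdev_size` converts a `bdev_get_bdevs` JSON blob into a capacity in MiB: block size times block count, divided by 1024 twice. A self-contained sketch of that arithmetic is below; `bdev_size_mib` is an illustrative name for the computation, not the helper's real signature in test/common/autotest_common.sh.

#!/usr/bin/env bash
# bdev_size_mib JSON -> print the bdev's capacity in MiB.
bdev_size_mib() {
    local json=$1 bs nb
    bs=$(jq -r '.[] .block_size' <<<"$json")   # 4096 in every dump in this log
    nb=$(jq -r '.[] .num_blocks' <<<"$json")   # 1310720 for nvme0n1, 26476544 for the lvol
    echo $(( bs * nb / 1024 / 1024 ))
}

# Worked with the logged values:
#   nvme0n1:       4096 * 1310720  / 1024 / 1024 = 5120   (the base_size=5120 above)
#   lvs/nvme0n1p0: 4096 * 26476544 / 1024 / 1024 = 103424 (the bdev_size=103424 results)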
00:29:04.757 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:29:04.757 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:05.015 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:05.015 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:05.015 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:29:05.015 13:23:11 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:29:05.015 13:23:11 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:05.272 13:23:11 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:05.272 13:23:11 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:05.272 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:05.272 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:05.272 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:29:05.272 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:29:05.272 13:23:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 00:29:05.530 13:23:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:05.530 { 00:29:05.530 "name": "f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48", 00:29:05.530 "aliases": [ 00:29:05.530 "lvs/nvme0n1p0" 00:29:05.530 ], 00:29:05.530 "product_name": "Logical Volume", 00:29:05.530 "block_size": 4096, 00:29:05.530 "num_blocks": 26476544, 00:29:05.530 "uuid": "f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48", 00:29:05.530 "assigned_rate_limits": { 00:29:05.530 "rw_ios_per_sec": 0, 00:29:05.530 "rw_mbytes_per_sec": 0, 00:29:05.530 "r_mbytes_per_sec": 0, 00:29:05.530 "w_mbytes_per_sec": 0 00:29:05.530 }, 00:29:05.530 "claimed": false, 00:29:05.530 "zoned": false, 00:29:05.530 "supported_io_types": { 00:29:05.530 "read": true, 00:29:05.530 "write": true, 00:29:05.530 "unmap": true, 00:29:05.530 "flush": false, 00:29:05.530 "reset": true, 00:29:05.530 "nvme_admin": false, 00:29:05.530 "nvme_io": false, 00:29:05.530 "nvme_io_md": false, 00:29:05.530 "write_zeroes": true, 00:29:05.530 "zcopy": false, 00:29:05.530 "get_zone_info": false, 00:29:05.530 "zone_management": false, 00:29:05.530 "zone_append": false, 00:29:05.530 "compare": false, 00:29:05.530 "compare_and_write": false, 00:29:05.530 "abort": false, 00:29:05.530 "seek_hole": true, 00:29:05.530 "seek_data": true, 00:29:05.530 "copy": false, 00:29:05.530 "nvme_iov_md": false 00:29:05.530 }, 00:29:05.530 "driver_specific": { 00:29:05.530 "lvol": { 00:29:05.530 "lvol_store_uuid": "92c3b395-39ac-49da-b1ed-3e9284f5b174", 00:29:05.530 "base_bdev": "nvme0n1", 00:29:05.530 "thin_provision": true, 00:29:05.530 "num_allocated_clusters": 0, 00:29:05.530 "snapshot": false, 00:29:05.530 "clone": false, 00:29:05.530 "esnap_clone": false 00:29:05.530 } 00:29:05.530 } 00:29:05.530 } 00:29:05.530 ]' 00:29:05.530 13:23:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:05.788 13:23:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:29:05.788 13:23:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:05.788 13:23:12 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:29:05.788 13:23:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:05.788 13:23:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:29:05.788 13:23:12 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:05.788 13:23:12 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 --l2p_dram_limit 10' 00:29:05.788 13:23:12 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:05.788 13:23:12 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:05.788 13:23:12 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:05.788 13:23:12 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:29:05.788 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:29:05.788 13:23:12 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 --l2p_dram_limit 10 -c nvc0n1p0 00:29:06.046 [2024-12-06 13:23:12.357246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.046 [2024-12-06 13:23:12.357511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:06.046 [2024-12-06 13:23:12.357551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:06.046 [2024-12-06 13:23:12.357566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.046 [2024-12-06 13:23:12.357660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.046 [2024-12-06 13:23:12.357680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:06.046 [2024-12-06 13:23:12.357696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:06.046 [2024-12-06 13:23:12.357708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.046 [2024-12-06 13:23:12.357750] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:06.046 [2024-12-06 13:23:12.358778] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:06.046 [2024-12-06 13:23:12.358814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.046 [2024-12-06 13:23:12.358828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:06.046 [2024-12-06 13:23:12.358857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:29:06.046 [2024-12-06 13:23:12.358872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.046 [2024-12-06 13:23:12.358997] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d953d101-c147-4f86-bca3-652dd3007b5e 00:29:06.046 [2024-12-06 13:23:12.360082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.046 [2024-12-06 13:23:12.360128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:06.046 [2024-12-06 13:23:12.360146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:06.046 [2024-12-06 13:23:12.360160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.046 [2024-12-06 13:23:12.364898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.046 [2024-12-06 
13:23:12.364969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:06.046 [2024-12-06 13:23:12.364988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.675 ms 00:29:06.046 [2024-12-06 13:23:12.365002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.046 [2024-12-06 13:23:12.365142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.046 [2024-12-06 13:23:12.365166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:06.046 [2024-12-06 13:23:12.365180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:29:06.046 [2024-12-06 13:23:12.365199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.046 [2024-12-06 13:23:12.365307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.046 [2024-12-06 13:23:12.365333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:06.046 [2024-12-06 13:23:12.365351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:06.046 [2024-12-06 13:23:12.365365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.046 [2024-12-06 13:23:12.365398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:06.046 [2024-12-06 13:23:12.370130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.046 [2024-12-06 13:23:12.370175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:06.046 [2024-12-06 13:23:12.370196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.736 ms 00:29:06.047 [2024-12-06 13:23:12.370209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.047 [2024-12-06 13:23:12.370261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.047 [2024-12-06 13:23:12.370277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:06.047 [2024-12-06 13:23:12.370292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:06.047 [2024-12-06 13:23:12.370303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.047 [2024-12-06 13:23:12.370353] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:06.047 [2024-12-06 13:23:12.370522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:06.047 [2024-12-06 13:23:12.370547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:06.047 [2024-12-06 13:23:12.370563] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:06.047 [2024-12-06 13:23:12.370581] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:06.047 [2024-12-06 13:23:12.370594] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:06.047 [2024-12-06 13:23:12.370608] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:06.047 [2024-12-06 13:23:12.370619] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:06.047 [2024-12-06 13:23:12.370638] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:06.047 [2024-12-06 13:23:12.370649] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:06.047 [2024-12-06 13:23:12.370663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.047 [2024-12-06 13:23:12.370687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:06.047 [2024-12-06 13:23:12.370703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:29:06.047 [2024-12-06 13:23:12.370715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.047 [2024-12-06 13:23:12.370816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.047 [2024-12-06 13:23:12.370830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:06.047 [2024-12-06 13:23:12.370876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:29:06.047 [2024-12-06 13:23:12.370901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.047 [2024-12-06 13:23:12.371033] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:06.047 [2024-12-06 13:23:12.371052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:06.047 [2024-12-06 13:23:12.371067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:06.047 [2024-12-06 13:23:12.371104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:06.047 [2024-12-06 13:23:12.371141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.047 [2024-12-06 13:23:12.371167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:06.047 [2024-12-06 13:23:12.371178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:06.047 [2024-12-06 13:23:12.371191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.047 [2024-12-06 13:23:12.371201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:06.047 [2024-12-06 13:23:12.371214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:06.047 [2024-12-06 13:23:12.371225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:06.047 [2024-12-06 13:23:12.371251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:06.047 [2024-12-06 13:23:12.371287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:06.047 
[2024-12-06 13:23:12.371324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:06.047 [2024-12-06 13:23:12.371360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:06.047 [2024-12-06 13:23:12.371393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:06.047 [2024-12-06 13:23:12.371431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.047 [2024-12-06 13:23:12.371455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:06.047 [2024-12-06 13:23:12.371465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:06.047 [2024-12-06 13:23:12.371480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.047 [2024-12-06 13:23:12.371490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:06.047 [2024-12-06 13:23:12.371503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:06.047 [2024-12-06 13:23:12.371526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:06.047 [2024-12-06 13:23:12.371570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:06.047 [2024-12-06 13:23:12.371592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371605] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:06.047 [2024-12-06 13:23:12.371620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:06.047 [2024-12-06 13:23:12.371631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.047 [2024-12-06 13:23:12.371657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:06.047 [2024-12-06 13:23:12.371672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:06.047 [2024-12-06 13:23:12.371690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:06.047 [2024-12-06 13:23:12.371703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:06.047 [2024-12-06 13:23:12.371713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:06.047 [2024-12-06 13:23:12.371726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:06.047 [2024-12-06 13:23:12.371739] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:06.047 [2024-12-06 
13:23:12.371758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.047 [2024-12-06 13:23:12.371772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:06.047 [2024-12-06 13:23:12.371786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:06.047 [2024-12-06 13:23:12.371798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:06.047 [2024-12-06 13:23:12.371811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:06.047 [2024-12-06 13:23:12.371823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:06.047 [2024-12-06 13:23:12.371836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:06.047 [2024-12-06 13:23:12.371864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:06.047 [2024-12-06 13:23:12.371882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:06.047 [2024-12-06 13:23:12.371893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:06.047 [2024-12-06 13:23:12.371909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:06.047 [2024-12-06 13:23:12.371920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:06.047 [2024-12-06 13:23:12.371934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:06.047 [2024-12-06 13:23:12.371945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:06.047 [2024-12-06 13:23:12.371959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:06.047 [2024-12-06 13:23:12.371970] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:06.047 [2024-12-06 13:23:12.371985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.047 [2024-12-06 13:23:12.371998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:06.047 [2024-12-06 13:23:12.372011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:06.047 [2024-12-06 13:23:12.372023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:06.048 [2024-12-06 13:23:12.372037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:06.048 [2024-12-06 13:23:12.372049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.048 [2024-12-06 13:23:12.372063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:06.048 [2024-12-06 13:23:12.372075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:29:06.048 [2024-12-06 13:23:12.372088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.048 [2024-12-06 13:23:12.372144] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:06.048 [2024-12-06 13:23:12.372366] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:07.950 [2024-12-06 13:23:14.198062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.198335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:07.950 [2024-12-06 13:23:14.198511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1825.929 ms 00:29:07.950 [2024-12-06 13:23:14.198644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.231885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.232144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:07.950 [2024-12-06 13:23:14.232321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.897 ms 00:29:07.950 [2024-12-06 13:23:14.232505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.232737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.232813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:07.950 [2024-12-06 13:23:14.232988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:07.950 [2024-12-06 13:23:14.233059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.274354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.274609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:07.950 [2024-12-06 13:23:14.274741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.091 ms 00:29:07.950 [2024-12-06 13:23:14.274901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.275003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.275101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:07.950 [2024-12-06 13:23:14.275215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:07.950 [2024-12-06 13:23:14.275283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.275874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.276034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:07.950 [2024-12-06 13:23:14.276148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:29:07.950 [2024-12-06 13:23:14.276204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 
[2024-12-06 13:23:14.276443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.276592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:07.950 [2024-12-06 13:23:14.276710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:29:07.950 [2024-12-06 13:23:14.276768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.294977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.295185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:07.950 [2024-12-06 13:23:14.295365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.075 ms 00:29:07.950 [2024-12-06 13:23:14.295425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.323854] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:07.950 [2024-12-06 13:23:14.326853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.327009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:07.950 [2024-12-06 13:23:14.327138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.067 ms 00:29:07.950 [2024-12-06 13:23:14.327193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.384066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.384135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:07.950 [2024-12-06 13:23:14.384161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.666 ms 00:29:07.950 [2024-12-06 13:23:14.384174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.384402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.384426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:07.950 [2024-12-06 13:23:14.384445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:29:07.950 [2024-12-06 13:23:14.384457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.416208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.416269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:07.950 [2024-12-06 13:23:14.416294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.669 ms 00:29:07.950 [2024-12-06 13:23:14.416307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.447621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.447835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:07.950 [2024-12-06 13:23:14.447887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.247 ms 00:29:07.950 [2024-12-06 13:23:14.447901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.950 [2024-12-06 13:23:14.448651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.950 [2024-12-06 13:23:14.448687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:07.951 
[2024-12-06 13:23:14.448706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:29:07.951 [2024-12-06 13:23:14.448721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.209 [2024-12-06 13:23:14.531913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.209 [2024-12-06 13:23:14.531984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:08.209 [2024-12-06 13:23:14.532014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.106 ms 00:29:08.209 [2024-12-06 13:23:14.532028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.209 [2024-12-06 13:23:14.564837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.209 [2024-12-06 13:23:14.564904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:08.209 [2024-12-06 13:23:14.564929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.689 ms 00:29:08.209 [2024-12-06 13:23:14.564942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.209 [2024-12-06 13:23:14.596739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.209 [2024-12-06 13:23:14.596797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:08.209 [2024-12-06 13:23:14.596825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.735 ms 00:29:08.209 [2024-12-06 13:23:14.596837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.209 [2024-12-06 13:23:14.628764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.209 [2024-12-06 13:23:14.628817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:08.209 [2024-12-06 13:23:14.628862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.846 ms 00:29:08.209 [2024-12-06 13:23:14.628879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.209 [2024-12-06 13:23:14.628952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.209 [2024-12-06 13:23:14.628970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:08.209 [2024-12-06 13:23:14.628989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:08.209 [2024-12-06 13:23:14.629001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.209 [2024-12-06 13:23:14.629128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.209 [2024-12-06 13:23:14.629151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:08.209 [2024-12-06 13:23:14.629167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:08.209 [2024-12-06 13:23:14.629178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.209 [2024-12-06 13:23:14.630383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2272.607 ms, result 0 00:29:08.209 { 00:29:08.209 "name": "ftl0", 00:29:08.209 "uuid": "d953d101-c147-4f86-bca3-652dd3007b5e" 00:29:08.209 } 00:29:08.209 13:23:14 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:08.209 13:23:14 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:08.468 13:23:14 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:29:08.468 13:23:14 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:08.726 [2024-12-06 13:23:15.169998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.727 [2024-12-06 13:23:15.170290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:08.727 [2024-12-06 13:23:15.170459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:08.727 [2024-12-06 13:23:15.170637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.727 [2024-12-06 13:23:15.170751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:08.727 [2024-12-06 13:23:15.174908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.727 [2024-12-06 13:23:15.175089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:08.727 [2024-12-06 13:23:15.175225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.919 ms 00:29:08.727 [2024-12-06 13:23:15.175346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.727 [2024-12-06 13:23:15.175758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.727 [2024-12-06 13:23:15.175931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:08.727 [2024-12-06 13:23:15.176069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:29:08.727 [2024-12-06 13:23:15.176123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.727 [2024-12-06 13:23:15.179620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.727 [2024-12-06 13:23:15.179788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:08.727 [2024-12-06 13:23:15.179941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms 00:29:08.727 [2024-12-06 13:23:15.179996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.727 [2024-12-06 13:23:15.186865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.727 [2024-12-06 13:23:15.187012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:08.727 [2024-12-06 13:23:15.187139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.705 ms 00:29:08.727 [2024-12-06 13:23:15.187274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.727 [2024-12-06 13:23:15.220462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.727 [2024-12-06 13:23:15.220697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:08.727 [2024-12-06 13:23:15.220827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.028 ms 00:29:08.727 [2024-12-06 13:23:15.220984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.727 [2024-12-06 13:23:15.240161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.727 [2024-12-06 13:23:15.240376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:08.727 [2024-12-06 13:23:15.240418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.067 ms 00:29:08.727 [2024-12-06 13:23:15.240433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.727 [2024-12-06 13:23:15.240647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.727 [2024-12-06 13:23:15.240670] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:08.727 [2024-12-06 13:23:15.240687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:29:08.727 [2024-12-06 13:23:15.240699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.985 [2024-12-06 13:23:15.274452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.985 [2024-12-06 13:23:15.274527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:08.985 [2024-12-06 13:23:15.274553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.716 ms 00:29:08.985 [2024-12-06 13:23:15.274566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.985 [2024-12-06 13:23:15.306155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.985 [2024-12-06 13:23:15.306220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:08.985 [2024-12-06 13:23:15.306244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.509 ms 00:29:08.985 [2024-12-06 13:23:15.306256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.986 [2024-12-06 13:23:15.337436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.986 [2024-12-06 13:23:15.337669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:08.986 [2024-12-06 13:23:15.337709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.098 ms 00:29:08.986 [2024-12-06 13:23:15.337723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.986 [2024-12-06 13:23:15.369177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.986 [2024-12-06 13:23:15.369245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:08.986 [2024-12-06 13:23:15.369270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.279 ms 00:29:08.986 [2024-12-06 13:23:15.369283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.986 [2024-12-06 13:23:15.369349] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:08.986 [2024-12-06 13:23:15.369375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369509] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 
[2024-12-06 13:23:15.369874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.369987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:29:08.986 [2024-12-06 13:23:15.370234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:08.986 [2024-12-06 13:23:15.370493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:08.987 [2024-12-06 13:23:15.370825] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:08.987 [2024-12-06 13:23:15.370863] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d953d101-c147-4f86-bca3-652dd3007b5e 00:29:08.987 [2024-12-06 13:23:15.370877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:08.987 [2024-12-06 13:23:15.370893] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:08.987 [2024-12-06 13:23:15.370906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:08.987 [2024-12-06 13:23:15.370921] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:08.987 [2024-12-06 13:23:15.370931] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:08.987 [2024-12-06 13:23:15.370945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:08.987 [2024-12-06 13:23:15.370957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:08.987 [2024-12-06 13:23:15.370969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:08.987 [2024-12-06 13:23:15.370979] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:29:08.987 [2024-12-06 13:23:15.370993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.987 [2024-12-06 13:23:15.371005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:08.987 [2024-12-06 13:23:15.371020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:29:08.987 [2024-12-06 13:23:15.371034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.987 [2024-12-06 13:23:15.388012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.987 [2024-12-06 13:23:15.388216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:08.987 [2024-12-06 13:23:15.388253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.885 ms 00:29:08.987 [2024-12-06 13:23:15.388268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.987 [2024-12-06 13:23:15.388725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.987 [2024-12-06 13:23:15.388751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:08.987 [2024-12-06 13:23:15.388773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:29:08.987 [2024-12-06 13:23:15.388784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.987 [2024-12-06 13:23:15.444540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.987 [2024-12-06 13:23:15.444613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:08.987 [2024-12-06 13:23:15.444636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.987 [2024-12-06 13:23:15.444648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.987 [2024-12-06 13:23:15.444738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.987 [2024-12-06 13:23:15.444754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:08.987 [2024-12-06 13:23:15.444773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.987 [2024-12-06 13:23:15.444784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.987 [2024-12-06 13:23:15.444962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.987 [2024-12-06 13:23:15.444985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:08.987 [2024-12-06 13:23:15.445003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.987 [2024-12-06 13:23:15.445018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.987 [2024-12-06 13:23:15.445051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.987 [2024-12-06 13:23:15.445066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:08.987 [2024-12-06 13:23:15.445080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.987 [2024-12-06 13:23:15.445094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.244 [2024-12-06 13:23:15.549983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.244 [2024-12-06 13:23:15.550054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:09.244 [2024-12-06 13:23:15.550077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
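(The statistics dump above also explains why WAF is reported as inf: with 0 user writes against 960 total, metadata-only, writes, the usual ratio total_writes / user_writes works out to 960 / 0. The shutdown being traced here was kicked off by the bdev_ftl_unload RPC at restore.sh line 65; a minimal sketch of that create/unload round-trip, reusing only the flags, paths, and bdev names recorded in this log, looks like:

    # Minimal sketch, assuming the same SPDK checkout layout as this run;
    # every flag and bdev name below is copied from the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # restore.sh@58: create the FTL bdev on the thin-provisioned lvol,
    # with nvc0n1p0 as the NV cache and a 10 MiB L2P DRAM limit
    $RPC -t 240 bdev_ftl_create -b ftl0 \
        -d f2fd2cbf-f1b8-44a4-89de-4e3038fe9c48 \
        --l2p_dram_limit 10 -c nvc0n1p0

    # restore.sh@65: unload drives the 'FTL shutdown' management process
    # traced here (persist L2P, NV cache and band metadata, then roll back
    # the open/create steps); it prints JSON true on success
    $RPC bdev_ftl_unload -b ftl0
)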
00:29:09.244 [2024-12-06 13:23:15.550090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.244 [2024-12-06 13:23:15.635665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.244 [2024-12-06 13:23:15.635939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:09.244 [2024-12-06 13:23:15.635978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.244 [2024-12-06 13:23:15.635997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.244 [2024-12-06 13:23:15.636152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.244 [2024-12-06 13:23:15.636171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:09.244 [2024-12-06 13:23:15.636186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.244 [2024-12-06 13:23:15.636198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.245 [2024-12-06 13:23:15.636273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.245 [2024-12-06 13:23:15.636292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:09.245 [2024-12-06 13:23:15.636307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.245 [2024-12-06 13:23:15.636318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.245 [2024-12-06 13:23:15.636461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.245 [2024-12-06 13:23:15.636482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:09.245 [2024-12-06 13:23:15.636498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.245 [2024-12-06 13:23:15.636510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.245 [2024-12-06 13:23:15.636575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.245 [2024-12-06 13:23:15.636593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:09.245 [2024-12-06 13:23:15.636608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.245 [2024-12-06 13:23:15.636619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.245 [2024-12-06 13:23:15.636672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.245 [2024-12-06 13:23:15.636688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:09.245 [2024-12-06 13:23:15.636702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.245 [2024-12-06 13:23:15.636713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.245 [2024-12-06 13:23:15.636773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:09.245 [2024-12-06 13:23:15.636791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:09.245 [2024-12-06 13:23:15.636805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:09.245 [2024-12-06 13:23:15.636816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.245 [2024-12-06 13:23:15.637000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 466.992 ms, result 0 00:29:09.245 true 00:29:09.245 13:23:15 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79459 
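(killprocess, whose xtrace follows, is the common/autotest_common.sh teardown helper. A condensed, hypothetical sketch of the checks it performs on this run; the FreeBSD and sudo branches not exercised here are omitted, and the @-references match the script line numbers in the trace below:

    # Condensed sketch of killprocess, not the verbatim helper.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                          # @954: require a PID
        kill -0 "$pid" || return 1                         # @958: process must be alive
        if [ "$(uname)" = Linux ]; then                    # @959: Linux path
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: resolve its name
        fi
        if [ "$process_name" != sudo ]; then               # @964: sudo-wrapped apps take another path (omitted)
            echo "killing process with pid $pid"           # @972
            kill "$pid"                                    # @973: signal the app
        fi
        wait "$pid"                                        # @978: reap it
    }

Here the target is pid 79459, which resolves to process name reactor_0, i.e. the SPDK app started for the restore test.)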
00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79459 ']' 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79459 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79459 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79459' 00:29:09.245 killing process with pid 79459 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79459 00:29:09.245 13:23:15 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79459 00:29:13.541 13:23:19 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:18.806 262144+0 records in 00:29:18.806 262144+0 records out 00:29:18.806 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.12136 s, 210 MB/s 00:29:18.806 13:23:24 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:20.204 13:23:26 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:20.463 [2024-12-06 13:23:26.773661] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:29:20.463 [2024-12-06 13:23:26.774108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79707 ] 00:29:20.463 [2024-12-06 13:23:26.954385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.721 [2024-12-06 13:23:27.063245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.981 [2024-12-06 13:23:27.397447] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:20.981 [2024-12-06 13:23:27.397739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:21.240 [2024-12-06 13:23:27.568762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.240 [2024-12-06 13:23:27.569141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:21.240 [2024-12-06 13:23:27.569177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:21.240 [2024-12-06 13:23:27.569190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.240 [2024-12-06 13:23:27.569296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.240 [2024-12-06 13:23:27.569323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:21.240 [2024-12-06 13:23:27.569337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:21.240 [2024-12-06 13:23:27.569349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.240 [2024-12-06 13:23:27.569394] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:29:21.241 [2024-12-06 13:23:27.570422] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:21.241 [2024-12-06 13:23:27.570467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.570483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:21.241 [2024-12-06 13:23:27.570496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:29:21.241 [2024-12-06 13:23:27.570508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.571794] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:21.241 [2024-12-06 13:23:27.589335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.589431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:21.241 [2024-12-06 13:23:27.589453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.537 ms 00:29:21.241 [2024-12-06 13:23:27.589466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.589627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.589649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:21.241 [2024-12-06 13:23:27.589662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:21.241 [2024-12-06 13:23:27.589674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.594691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.594762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:21.241 [2024-12-06 13:23:27.594780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:29:21.241 [2024-12-06 13:23:27.594811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.594977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.595002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:21.241 [2024-12-06 13:23:27.595016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:21.241 [2024-12-06 13:23:27.595027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.595109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.595128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:21.241 [2024-12-06 13:23:27.595142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:21.241 [2024-12-06 13:23:27.595153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.595203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:21.241 [2024-12-06 13:23:27.599611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.599659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:21.241 [2024-12-06 13:23:27.599690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:29:21.241 [2024-12-06 13:23:27.599702] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.599768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.599788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:21.241 [2024-12-06 13:23:27.599801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:21.241 [2024-12-06 13:23:27.599811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.599931] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:21.241 [2024-12-06 13:23:27.599981] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:21.241 [2024-12-06 13:23:27.600028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:21.241 [2024-12-06 13:23:27.600059] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:21.241 [2024-12-06 13:23:27.600175] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:21.241 [2024-12-06 13:23:27.600190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:21.241 [2024-12-06 13:23:27.600205] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:21.241 [2024-12-06 13:23:27.600221] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:21.241 [2024-12-06 13:23:27.600234] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:21.241 [2024-12-06 13:23:27.600246] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:21.241 [2024-12-06 13:23:27.600257] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:21.241 [2024-12-06 13:23:27.600279] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:21.241 [2024-12-06 13:23:27.600289] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:21.241 [2024-12-06 13:23:27.600302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.600314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:21.241 [2024-12-06 13:23:27.600326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:29:21.241 [2024-12-06 13:23:27.600338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.600439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.241 [2024-12-06 13:23:27.600456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:21.241 [2024-12-06 13:23:27.600468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:21.241 [2024-12-06 13:23:27.600479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.241 [2024-12-06 13:23:27.600614] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:21.241 [2024-12-06 13:23:27.600637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:21.241 [2024-12-06 13:23:27.600650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:29:21.241 [2024-12-06 13:23:27.600661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:21.241 [2024-12-06 13:23:27.600683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:21.241 [2024-12-06 13:23:27.600704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:21.241 [2024-12-06 13:23:27.600715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:21.241 [2024-12-06 13:23:27.600736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:21.241 [2024-12-06 13:23:27.600746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:21.241 [2024-12-06 13:23:27.600756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:21.241 [2024-12-06 13:23:27.600787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:21.241 [2024-12-06 13:23:27.600798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:21.241 [2024-12-06 13:23:27.600809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:21.241 [2024-12-06 13:23:27.600829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:21.241 [2024-12-06 13:23:27.600853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:21.241 [2024-12-06 13:23:27.600878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:21.241 [2024-12-06 13:23:27.600899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:21.241 [2024-12-06 13:23:27.600912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:21.241 [2024-12-06 13:23:27.600933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:21.241 [2024-12-06 13:23:27.600943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:21.241 [2024-12-06 13:23:27.600963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:21.241 [2024-12-06 13:23:27.600973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:21.241 [2024-12-06 13:23:27.600983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:21.241 [2024-12-06 13:23:27.600993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:21.241 [2024-12-06 13:23:27.601004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:21.241 [2024-12-06 13:23:27.601014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:21.241 [2024-12-06 13:23:27.601024] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:29:21.241 [2024-12-06 13:23:27.601034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:21.241 [2024-12-06 13:23:27.601044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:21.241 [2024-12-06 13:23:27.601054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:21.241 [2024-12-06 13:23:27.601065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:21.241 [2024-12-06 13:23:27.601074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:21.241 [2024-12-06 13:23:27.601084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:21.241 [2024-12-06 13:23:27.601095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:21.241 [2024-12-06 13:23:27.601105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:21.241 [2024-12-06 13:23:27.601115] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:21.241 [2024-12-06 13:23:27.601126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:21.241 [2024-12-06 13:23:27.601137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:21.242 [2024-12-06 13:23:27.601148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:21.242 [2024-12-06 13:23:27.601159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:21.242 [2024-12-06 13:23:27.601170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:21.242 [2024-12-06 13:23:27.601180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:21.242 [2024-12-06 13:23:27.601190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:21.242 [2024-12-06 13:23:27.601200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:21.242 [2024-12-06 13:23:27.601211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:21.242 [2024-12-06 13:23:27.601223] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:21.242 [2024-12-06 13:23:27.601237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:21.242 [2024-12-06 13:23:27.601262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:21.242 [2024-12-06 13:23:27.601274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:21.242 [2024-12-06 13:23:27.601285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:21.242 [2024-12-06 13:23:27.601297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:21.242 [2024-12-06 13:23:27.601308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:21.242 [2024-12-06 13:23:27.601319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:21.242 [2024-12-06 13:23:27.601330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:21.242 [2024-12-06 13:23:27.601341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:21.242 [2024-12-06 13:23:27.601352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:21.242 [2024-12-06 13:23:27.601363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:21.242 [2024-12-06 13:23:27.601374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:21.242 [2024-12-06 13:23:27.601385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:21.242 [2024-12-06 13:23:27.601396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:21.242 [2024-12-06 13:23:27.601408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:21.242 [2024-12-06 13:23:27.601419] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:21.242 [2024-12-06 13:23:27.601431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:21.242 [2024-12-06 13:23:27.601444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:21.242 [2024-12-06 13:23:27.601455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:21.242 [2024-12-06 13:23:27.601466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:21.242 [2024-12-06 13:23:27.601477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:21.242 [2024-12-06 13:23:27.601490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.601501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:21.242 [2024-12-06 13:23:27.601513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:29:21.242 [2024-12-06 13:23:27.601524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.637189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.637275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:21.242 [2024-12-06 13:23:27.637300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.590 ms 00:29:21.242 [2024-12-06 13:23:27.637327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.637450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.637467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:21.242 [2024-12-06 13:23:27.637481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.066 ms 00:29:21.242 [2024-12-06 13:23:27.637492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.693176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.693558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:21.242 [2024-12-06 13:23:27.693595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.562 ms 00:29:21.242 [2024-12-06 13:23:27.693611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.693711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.693730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:21.242 [2024-12-06 13:23:27.693765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:21.242 [2024-12-06 13:23:27.693776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.694302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.694331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:21.242 [2024-12-06 13:23:27.694346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:29:21.242 [2024-12-06 13:23:27.694358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.694532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.694559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:21.242 [2024-12-06 13:23:27.694588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:29:21.242 [2024-12-06 13:23:27.694599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.712773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.713166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:21.242 [2024-12-06 13:23:27.713203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.140 ms 00:29:21.242 [2024-12-06 13:23:27.713217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.730888] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:21.242 [2024-12-06 13:23:27.731003] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:21.242 [2024-12-06 13:23:27.731028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.731041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:21.242 [2024-12-06 13:23:27.731058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.587 ms 00:29:21.242 [2024-12-06 13:23:27.731070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.242 [2024-12-06 13:23:27.762190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.242 [2024-12-06 13:23:27.762349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:21.242 [2024-12-06 13:23:27.762375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.996 ms 00:29:21.242 [2024-12-06 13:23:27.762387] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.779701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.779812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:21.502 [2024-12-06 13:23:27.779834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.172 ms 00:29:21.502 [2024-12-06 13:23:27.779880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.796772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.796904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:21.502 [2024-12-06 13:23:27.796929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.767 ms 00:29:21.502 [2024-12-06 13:23:27.796941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.797978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.798021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:21.502 [2024-12-06 13:23:27.798040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:29:21.502 [2024-12-06 13:23:27.798065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.876964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.877249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:21.502 [2024-12-06 13:23:27.877283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.869 ms 00:29:21.502 [2024-12-06 13:23:27.877309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.890404] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:21.502 [2024-12-06 13:23:27.893118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.893157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:21.502 [2024-12-06 13:23:27.893176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.729 ms 00:29:21.502 [2024-12-06 13:23:27.893187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.893315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.893337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:21.502 [2024-12-06 13:23:27.893351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:21.502 [2024-12-06 13:23:27.893362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.893464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.893485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:21.502 [2024-12-06 13:23:27.893498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:21.502 [2024-12-06 13:23:27.893509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.893542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.893558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:29:21.502 [2024-12-06 13:23:27.893570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:21.502 [2024-12-06 13:23:27.893581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.893625] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:21.502 [2024-12-06 13:23:27.893647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.893659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:21.502 [2024-12-06 13:23:27.893670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:21.502 [2024-12-06 13:23:27.893681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.927138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.927210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:21.502 [2024-12-06 13:23:27.927235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.429 ms 00:29:21.502 [2024-12-06 13:23:27.927276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.927378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.502 [2024-12-06 13:23:27.927399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:21.502 [2024-12-06 13:23:27.927412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:21.502 [2024-12-06 13:23:27.927423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.502 [2024-12-06 13:23:27.928868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 359.482 ms, result 0 00:29:22.438  [2024-12-06T13:23:30.344Z] Copying: 27/1024 [MB] (27 MBps) [2024-12-06T13:23:31.279Z] Copying: 55/1024 [MB] (27 MBps) [2024-12-06T13:23:32.214Z] Copying: 83/1024 [MB] (27 MBps) [2024-12-06T13:23:33.148Z] Copying: 110/1024 [MB] (27 MBps) [2024-12-06T13:23:34.083Z] Copying: 139/1024 [MB] (28 MBps) [2024-12-06T13:23:35.015Z] Copying: 168/1024 [MB] (29 MBps) [2024-12-06T13:23:35.947Z] Copying: 198/1024 [MB] (30 MBps) [2024-12-06T13:23:37.321Z] Copying: 227/1024 [MB] (29 MBps) [2024-12-06T13:23:38.256Z] Copying: 258/1024 [MB] (30 MBps) [2024-12-06T13:23:39.187Z] Copying: 288/1024 [MB] (30 MBps) [2024-12-06T13:23:40.161Z] Copying: 318/1024 [MB] (29 MBps) [2024-12-06T13:23:41.114Z] Copying: 346/1024 [MB] (28 MBps) [2024-12-06T13:23:42.051Z] Copying: 373/1024 [MB] (26 MBps) [2024-12-06T13:23:42.986Z] Copying: 401/1024 [MB] (28 MBps) [2024-12-06T13:23:44.363Z] Copying: 428/1024 [MB] (27 MBps) [2024-12-06T13:23:45.304Z] Copying: 456/1024 [MB] (27 MBps) [2024-12-06T13:23:45.978Z] Copying: 483/1024 [MB] (27 MBps) [2024-12-06T13:23:47.354Z] Copying: 510/1024 [MB] (27 MBps) [2024-12-06T13:23:48.290Z] Copying: 539/1024 [MB] (28 MBps) [2024-12-06T13:23:49.226Z] Copying: 565/1024 [MB] (26 MBps) [2024-12-06T13:23:50.161Z] Copying: 594/1024 [MB] (28 MBps) [2024-12-06T13:23:51.092Z] Copying: 623/1024 [MB] (29 MBps) [2024-12-06T13:23:52.024Z] Copying: 650/1024 [MB] (26 MBps) [2024-12-06T13:23:52.957Z] Copying: 680/1024 [MB] (30 MBps) [2024-12-06T13:23:54.420Z] Copying: 710/1024 [MB] (30 MBps) [2024-12-06T13:23:55.008Z] Copying: 740/1024 [MB] (29 MBps) [2024-12-06T13:23:56.380Z] Copying: 769/1024 [MB] (29 
MBps) [2024-12-06T13:23:56.948Z] Copying: 797/1024 [MB] (28 MBps) [2024-12-06T13:23:58.324Z] Copying: 825/1024 [MB] (28 MBps) [2024-12-06T13:23:59.259Z] Copying: 852/1024 [MB] (27 MBps) [2024-12-06T13:24:00.194Z] Copying: 879/1024 [MB] (26 MBps) [2024-12-06T13:24:01.127Z] Copying: 908/1024 [MB] (28 MBps) [2024-12-06T13:24:02.059Z] Copying: 936/1024 [MB] (28 MBps) [2024-12-06T13:24:02.995Z] Copying: 967/1024 [MB] (30 MBps) [2024-12-06T13:24:03.929Z] Copying: 996/1024 [MB] (28 MBps) [2024-12-06T13:24:03.929Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-12-06 13:24:03.888392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.401 [2024-12-06 13:24:03.888507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:57.401 [2024-12-06 13:24:03.888531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:57.401 [2024-12-06 13:24:03.888544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.401 [2024-12-06 13:24:03.888581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:57.401 [2024-12-06 13:24:03.891931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.401 [2024-12-06 13:24:03.891971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:57.401 [2024-12-06 13:24:03.892002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.325 ms 00:29:57.401 [2024-12-06 13:24:03.892014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.401 [2024-12-06 13:24:03.893423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.401 [2024-12-06 13:24:03.893467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:57.401 [2024-12-06 13:24:03.893485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:29:57.401 [2024-12-06 13:24:03.893497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.401 [2024-12-06 13:24:03.910106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.401 [2024-12-06 13:24:03.910293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:57.401 [2024-12-06 13:24:03.910324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.586 ms 00:29:57.401 [2024-12-06 13:24:03.910339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.401 [2024-12-06 13:24:03.917119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.401 [2024-12-06 13:24:03.917275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:57.401 [2024-12-06 13:24:03.917302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.718 ms 00:29:57.401 [2024-12-06 13:24:03.917314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.661 [2024-12-06 13:24:03.948432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.661 [2024-12-06 13:24:03.948603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:57.661 [2024-12-06 13:24:03.948631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.033 ms 00:29:57.661 [2024-12-06 13:24:03.948644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.661 [2024-12-06 13:24:03.966287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.661 [2024-12-06 13:24:03.966350] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:57.661 [2024-12-06 13:24:03.966370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.576 ms 00:29:57.661 [2024-12-06 13:24:03.966382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.661 [2024-12-06 13:24:03.966551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.661 [2024-12-06 13:24:03.966582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:57.661 [2024-12-06 13:24:03.966597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:29:57.661 [2024-12-06 13:24:03.966608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.661 [2024-12-06 13:24:03.998346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.661 [2024-12-06 13:24:03.998401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:57.661 [2024-12-06 13:24:03.998420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.715 ms 00:29:57.661 [2024-12-06 13:24:03.998431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.661 [2024-12-06 13:24:04.029680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.661 [2024-12-06 13:24:04.029732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:57.661 [2024-12-06 13:24:04.029751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.198 ms 00:29:57.661 [2024-12-06 13:24:04.029763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.661 [2024-12-06 13:24:04.060579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.661 [2024-12-06 13:24:04.060628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:57.661 [2024-12-06 13:24:04.060647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.767 ms 00:29:57.661 [2024-12-06 13:24:04.060658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.661 [2024-12-06 13:24:04.091459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.661 [2024-12-06 13:24:04.091641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:57.661 [2024-12-06 13:24:04.091671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.689 ms 00:29:57.661 [2024-12-06 13:24:04.091684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.661 [2024-12-06 13:24:04.091730] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:57.661 [2024-12-06 13:24:04.091753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 
13:24:04.091881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.091989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:29:57.661 [2024-12-06 13:24:04.092181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:57.661 [2024-12-06 13:24:04.092253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.092997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.093009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:57.662 [2024-12-06 13:24:04.093031] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:57.662 [2024-12-06 13:24:04.093055] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d953d101-c147-4f86-bca3-652dd3007b5e 00:29:57.662 [2024-12-06 13:24:04.093067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:57.662 [2024-12-06 13:24:04.093078] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:57.662 [2024-12-06 13:24:04.093094] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:57.662 [2024-12-06 13:24:04.093105] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:57.662 [2024-12-06 13:24:04.093116] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:57.662 [2024-12-06 13:24:04.093145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:57.662 [2024-12-06 13:24:04.093157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:57.662 [2024-12-06 13:24:04.093167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:57.662 [2024-12-06 13:24:04.093177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:57.662 [2024-12-06 13:24:04.093188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.662 [2024-12-06 13:24:04.093200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:57.662 [2024-12-06 13:24:04.093212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:29:57.662 [2024-12-06 13:24:04.093223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.662 [2024-12-06 13:24:04.109711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.662 [2024-12-06 13:24:04.109754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:57.662 [2024-12-06 13:24:04.109772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.434 ms 00:29:57.662 [2024-12-06 13:24:04.109784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.662 [2024-12-06 13:24:04.110244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.662 [2024-12-06 13:24:04.110391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:57.662 [2024-12-06 13:24:04.110417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:29:57.662 [2024-12-06 13:24:04.110446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.662 [2024-12-06 13:24:04.153339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.662 [2024-12-06 13:24:04.153539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:57.662 [2024-12-06 13:24:04.153568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.662 [2024-12-06 13:24:04.153581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.662 [2024-12-06 13:24:04.153661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.662 [2024-12-06 13:24:04.153678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.663 [2024-12-06 13:24:04.153690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.663 [2024-12-06 13:24:04.153708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.663 [2024-12-06 13:24:04.153822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.663 [2024-12-06 13:24:04.153866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.663 [2024-12-06 13:24:04.153882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.663 [2024-12-06 13:24:04.153893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.663 [2024-12-06 13:24:04.153916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.663 [2024-12-06 13:24:04.153930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.663 [2024-12-06 13:24:04.153941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
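The trace above is the ftl_restore round trip in a nutshell: restore.sh fills a 1 GiB file with random data (262144 x 4 KiB blocks = 1073741824 bytes, matching the dd output earlier), checksums it, writes it through the ftl0 bdev with spdk_dd, and lets spdk_dd tear the device down cleanly, which is the 'FTL shutdown' / Rollback sequence being traced here. The spdk_dd invocation that follows below (restore.sh@74) then restarts the device and reads the same range back. A minimal shell sketch of that flow, using only commands and flags visible in this log; the bare file names, the ftl.json path, and the final md5 comparison step are stand-ins inferred from the restore.sh@69/@70/@73/@74 lines, not the verbatim script:

  # Sketch only -- restore.sh wires these up with absolute paths under
  # /home/vagrant/spdk_repo/spdk/test/ftl and a generated JSON config.
  dd if=/dev/urandom of=testfile bs=4K count=256K   # 262144 x 4 KiB = 1 GiB
  md5_before=$(md5sum testfile | awk '{print $1}')

  # Write the file into the FTL bdev; spdk_dd shuts the device down
  # cleanly on exit ("FTL shutdown ... result 0" below).
  spdk_dd --if=testfile --ob=ftl0 --json=ftl.json

  # Restart the device and read the same 262144 blocks back out
  # (the restore.sh@74 invocation that follows in this log).
  spdk_dd --ib=ftl0 --of=testfile --json=ftl.json --count=262144

  # Assumption: the test passes when the data survived the
  # shutdown/startup cycle, i.e. the checksums still agree.
  md5_after=$(md5sum testfile | awk '{print $1}')
  [ "$md5_before" = "$md5_after" ] && echo "FTL restore OK"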
00:29:57.663 [2024-12-06 13:24:04.153952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.257640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.921 [2024-12-06 13:24:04.257712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.921 [2024-12-06 13:24:04.257732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.921 [2024-12-06 13:24:04.257744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.341761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.921 [2024-12-06 13:24:04.341830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.921 [2024-12-06 13:24:04.341872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.921 [2024-12-06 13:24:04.341892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.341994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.921 [2024-12-06 13:24:04.342013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.921 [2024-12-06 13:24:04.342025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.921 [2024-12-06 13:24:04.342037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.342085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.921 [2024-12-06 13:24:04.342102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.921 [2024-12-06 13:24:04.342114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.921 [2024-12-06 13:24:04.342125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.342256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.921 [2024-12-06 13:24:04.342277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.921 [2024-12-06 13:24:04.342290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.921 [2024-12-06 13:24:04.342301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.342351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.921 [2024-12-06 13:24:04.342368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:57.921 [2024-12-06 13:24:04.342380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.921 [2024-12-06 13:24:04.342390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.342442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.921 [2024-12-06 13:24:04.342465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.921 [2024-12-06 13:24:04.342477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.921 [2024-12-06 13:24:04.342488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.342539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.921 [2024-12-06 13:24:04.342563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.921 [2024-12-06 13:24:04.342576] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.921 [2024-12-06 13:24:04.342587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.921 [2024-12-06 13:24:04.342729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.303 ms, result 0 00:29:59.343 00:29:59.343 00:29:59.343 13:24:05 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:59.343 [2024-12-06 13:24:05.582195] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:29:59.343 [2024-12-06 13:24:05.582347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80088 ] 00:29:59.343 [2024-12-06 13:24:05.761043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.343 [2024-12-06 13:24:05.865609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.910 [2024-12-06 13:24:06.192656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.910 [2024-12-06 13:24:06.192972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.910 [2024-12-06 13:24:06.353364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.353442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:59.910 [2024-12-06 13:24:06.353465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:59.910 [2024-12-06 13:24:06.353478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.353547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.353569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:59.910 [2024-12-06 13:24:06.353582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:59.910 [2024-12-06 13:24:06.353594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.353627] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:59.910 [2024-12-06 13:24:06.354818] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:59.910 [2024-12-06 13:24:06.355034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.355161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:59.910 [2024-12-06 13:24:06.355302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:29:59.910 [2024-12-06 13:24:06.355328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.356514] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:59.910 [2024-12-06 13:24:06.373062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.373111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:59.910 [2024-12-06 
13:24:06.373131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.549 ms 00:29:59.910 [2024-12-06 13:24:06.373143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.373236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.373257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:59.910 [2024-12-06 13:24:06.373269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:59.910 [2024-12-06 13:24:06.373280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.377763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.377816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:59.910 [2024-12-06 13:24:06.377833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.387 ms 00:29:59.910 [2024-12-06 13:24:06.377874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.377974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.377993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:59.910 [2024-12-06 13:24:06.378006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:59.910 [2024-12-06 13:24:06.378018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.378085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.378104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:59.910 [2024-12-06 13:24:06.378116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:59.910 [2024-12-06 13:24:06.378128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.378169] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:59.910 [2024-12-06 13:24:06.382444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.382486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:59.910 [2024-12-06 13:24:06.382508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:29:59.910 [2024-12-06 13:24:06.382519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.382561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.910 [2024-12-06 13:24:06.382578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:59.910 [2024-12-06 13:24:06.382590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:59.910 [2024-12-06 13:24:06.382601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.910 [2024-12-06 13:24:06.382650] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:59.910 [2024-12-06 13:24:06.382692] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:59.910 [2024-12-06 13:24:06.382736] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:59.911 [2024-12-06 13:24:06.382760] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:59.911 [2024-12-06 13:24:06.382894] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:59.911 [2024-12-06 13:24:06.382914] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:59.911 [2024-12-06 13:24:06.382929] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:59.911 [2024-12-06 13:24:06.382944] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:59.911 [2024-12-06 13:24:06.382958] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:59.911 [2024-12-06 13:24:06.382969] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:59.911 [2024-12-06 13:24:06.382980] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:59.911 [2024-12-06 13:24:06.382996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:59.911 [2024-12-06 13:24:06.383007] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:59.911 [2024-12-06 13:24:06.383019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.911 [2024-12-06 13:24:06.383030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:59.911 [2024-12-06 13:24:06.383042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:29:59.911 [2024-12-06 13:24:06.383053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.911 [2024-12-06 13:24:06.383149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.911 [2024-12-06 13:24:06.383164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:59.911 [2024-12-06 13:24:06.383176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:59.911 [2024-12-06 13:24:06.383186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.911 [2024-12-06 13:24:06.383339] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:59.911 [2024-12-06 13:24:06.383362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:59.911 [2024-12-06 13:24:06.383375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.911 [2024-12-06 13:24:06.383386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:59.911 [2024-12-06 13:24:06.383408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:59.911 [2024-12-06 13:24:06.383430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:59.911 [2024-12-06 13:24:06.383441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.911 [2024-12-06 13:24:06.383461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:59.911 [2024-12-06 13:24:06.383472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:29:59.911 [2024-12-06 13:24:06.383482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.911 [2024-12-06 13:24:06.383505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:59.911 [2024-12-06 13:24:06.383516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:59.911 [2024-12-06 13:24:06.383527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:59.911 [2024-12-06 13:24:06.383560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:59.911 [2024-12-06 13:24:06.383570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:59.911 [2024-12-06 13:24:06.383591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.911 [2024-12-06 13:24:06.383613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:59.911 [2024-12-06 13:24:06.383624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.911 [2024-12-06 13:24:06.383644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:59.911 [2024-12-06 13:24:06.383654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.911 [2024-12-06 13:24:06.383674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:59.911 [2024-12-06 13:24:06.383684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.911 [2024-12-06 13:24:06.383704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:59.911 [2024-12-06 13:24:06.383714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.911 [2024-12-06 13:24:06.383734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:59.911 [2024-12-06 13:24:06.383744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:59.911 [2024-12-06 13:24:06.383761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.911 [2024-12-06 13:24:06.383771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:59.911 [2024-12-06 13:24:06.383782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:59.911 [2024-12-06 13:24:06.383791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:59.911 [2024-12-06 13:24:06.383811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:59.911 [2024-12-06 13:24:06.383822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.911 [2024-12-06 13:24:06.383833] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:59.911 [2024-12-06 13:24:06.384145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:59.911 [2024-12-06 13:24:06.384203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.911 [2024-12-06 13:24:06.384246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.911 [2024-12-06 13:24:06.384284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:59.911 [2024-12-06 13:24:06.384402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:59.911 [2024-12-06 13:24:06.384454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:59.911 [2024-12-06 13:24:06.384494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:59.911 [2024-12-06 13:24:06.384531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:59.911 [2024-12-06 13:24:06.384669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:59.911 [2024-12-06 13:24:06.384712] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:59.911 [2024-12-06 13:24:06.384868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.911 [2024-12-06 13:24:06.385109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:59.911 [2024-12-06 13:24:06.385170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:59.911 [2024-12-06 13:24:06.385226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:59.911 [2024-12-06 13:24:06.385416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:59.911 [2024-12-06 13:24:06.385476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:59.911 [2024-12-06 13:24:06.385597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:59.911 [2024-12-06 13:24:06.385664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:59.911 [2024-12-06 13:24:06.385719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:59.911 [2024-12-06 13:24:06.385873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:59.911 [2024-12-06 13:24:06.385893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:59.911 [2024-12-06 13:24:06.385905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:59.911 [2024-12-06 13:24:06.385916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:59.911 [2024-12-06 13:24:06.385926] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:59.911 [2024-12-06 13:24:06.385939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:59.911 [2024-12-06 13:24:06.385950] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:59.911 [2024-12-06 13:24:06.385963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.912 [2024-12-06 13:24:06.385976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:59.912 [2024-12-06 13:24:06.385987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:59.912 [2024-12-06 13:24:06.385998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:59.912 [2024-12-06 13:24:06.386009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:59.912 [2024-12-06 13:24:06.386023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.912 [2024-12-06 13:24:06.386036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:59.912 [2024-12-06 13:24:06.386048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.756 ms 00:29:59.912 [2024-12-06 13:24:06.386060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.912 [2024-12-06 13:24:06.419327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.912 [2024-12-06 13:24:06.419396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:59.912 [2024-12-06 13:24:06.419417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.187 ms 00:29:59.912 [2024-12-06 13:24:06.419435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.912 [2024-12-06 13:24:06.419562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.912 [2024-12-06 13:24:06.419580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:59.912 [2024-12-06 13:24:06.419594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:29:59.912 [2024-12-06 13:24:06.419606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.471833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.471910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:00.170 [2024-12-06 13:24:06.471932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.127 ms 00:30:00.170 [2024-12-06 13:24:06.471944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.472025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.472042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:00.170 [2024-12-06 13:24:06.472062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:00.170 [2024-12-06 13:24:06.472073] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.472476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.472496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:00.170 [2024-12-06 13:24:06.472510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:30:00.170 [2024-12-06 13:24:06.472521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.472680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.472700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:00.170 [2024-12-06 13:24:06.472720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:30:00.170 [2024-12-06 13:24:06.472731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.489379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.489436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:00.170 [2024-12-06 13:24:06.489456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.618 ms 00:30:00.170 [2024-12-06 13:24:06.489468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.505986] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:00.170 [2024-12-06 13:24:06.506180] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:00.170 [2024-12-06 13:24:06.506207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.506221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:00.170 [2024-12-06 13:24:06.506233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.589 ms 00:30:00.170 [2024-12-06 13:24:06.506244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.536111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.536169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:00.170 [2024-12-06 13:24:06.536188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.815 ms 00:30:00.170 [2024-12-06 13:24:06.536200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.552261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.552311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:00.170 [2024-12-06 13:24:06.552330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.983 ms 00:30:00.170 [2024-12-06 13:24:06.552342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 [2024-12-06 13:24:06.567864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.170 [2024-12-06 13:24:06.567927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:00.170 [2024-12-06 13:24:06.567946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.473 ms 00:30:00.170 [2024-12-06 13:24:06.567957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.170 
[2024-12-06 13:24:06.568761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 13:24:06.568801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:00.171 [2024-12-06 13:24:06.568823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:30:00.171 [2024-12-06 13:24:06.568834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.642152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 13:24:06.642232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:00.171 [2024-12-06 13:24:06.642262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.260 ms 00:30:00.171 [2024-12-06 13:24:06.642275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.655428] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:00.171 [2024-12-06 13:24:06.658100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 13:24:06.658142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:00.171 [2024-12-06 13:24:06.658161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.747 ms 00:30:00.171 [2024-12-06 13:24:06.658173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.658298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 13:24:06.658319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:00.171 [2024-12-06 13:24:06.658337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:00.171 [2024-12-06 13:24:06.658349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.658443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 13:24:06.658463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:00.171 [2024-12-06 13:24:06.658475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:00.171 [2024-12-06 13:24:06.658486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.658519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 13:24:06.658534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:00.171 [2024-12-06 13:24:06.658546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:00.171 [2024-12-06 13:24:06.658557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.658606] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:00.171 [2024-12-06 13:24:06.658622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 13:24:06.658634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:00.171 [2024-12-06 13:24:06.658647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:00.171 [2024-12-06 13:24:06.658657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.690229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 
13:24:06.690285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:00.171 [2024-12-06 13:24:06.690312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.546 ms 00:30:00.171 [2024-12-06 13:24:06.690324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.690414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.171 [2024-12-06 13:24:06.690433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:00.171 [2024-12-06 13:24:06.690446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:00.171 [2024-12-06 13:24:06.690457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.171 [2024-12-06 13:24:06.691729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.840 ms, result 0 00:30:01.543  [2024-12-06T13:24:09.003Z] Copying: 28/1024 [MB] (28 MBps) [2024-12-06T13:24:09.935Z] Copying: 57/1024 [MB] (28 MBps) [2024-12-06T13:24:11.304Z] Copying: 83/1024 [MB] (26 MBps) [2024-12-06T13:24:12.237Z] Copying: 110/1024 [MB] (27 MBps) [2024-12-06T13:24:13.171Z] Copying: 136/1024 [MB] (25 MBps) [2024-12-06T13:24:14.106Z] Copying: 164/1024 [MB] (27 MBps) [2024-12-06T13:24:15.060Z] Copying: 189/1024 [MB] (25 MBps) [2024-12-06T13:24:16.039Z] Copying: 215/1024 [MB] (26 MBps) [2024-12-06T13:24:16.973Z] Copying: 243/1024 [MB] (27 MBps) [2024-12-06T13:24:18.346Z] Copying: 271/1024 [MB] (28 MBps) [2024-12-06T13:24:19.278Z] Copying: 296/1024 [MB] (24 MBps) [2024-12-06T13:24:20.212Z] Copying: 321/1024 [MB] (24 MBps) [2024-12-06T13:24:21.147Z] Copying: 347/1024 [MB] (25 MBps) [2024-12-06T13:24:22.083Z] Copying: 371/1024 [MB] (24 MBps) [2024-12-06T13:24:23.018Z] Copying: 397/1024 [MB] (25 MBps) [2024-12-06T13:24:23.952Z] Copying: 423/1024 [MB] (25 MBps) [2024-12-06T13:24:25.391Z] Copying: 449/1024 [MB] (25 MBps) [2024-12-06T13:24:25.971Z] Copying: 476/1024 [MB] (26 MBps) [2024-12-06T13:24:27.343Z] Copying: 499/1024 [MB] (23 MBps) [2024-12-06T13:24:28.275Z] Copying: 524/1024 [MB] (25 MBps) [2024-12-06T13:24:29.208Z] Copying: 553/1024 [MB] (29 MBps) [2024-12-06T13:24:30.140Z] Copying: 582/1024 [MB] (29 MBps) [2024-12-06T13:24:31.073Z] Copying: 612/1024 [MB] (29 MBps) [2024-12-06T13:24:32.116Z] Copying: 639/1024 [MB] (26 MBps) [2024-12-06T13:24:33.050Z] Copying: 669/1024 [MB] (29 MBps) [2024-12-06T13:24:33.984Z] Copying: 699/1024 [MB] (29 MBps) [2024-12-06T13:24:34.917Z] Copying: 726/1024 [MB] (27 MBps) [2024-12-06T13:24:36.292Z] Copying: 755/1024 [MB] (28 MBps) [2024-12-06T13:24:37.224Z] Copying: 781/1024 [MB] (25 MBps) [2024-12-06T13:24:38.158Z] Copying: 808/1024 [MB] (27 MBps) [2024-12-06T13:24:39.092Z] Copying: 836/1024 [MB] (28 MBps) [2024-12-06T13:24:40.028Z] Copying: 865/1024 [MB] (28 MBps) [2024-12-06T13:24:40.961Z] Copying: 894/1024 [MB] (28 MBps) [2024-12-06T13:24:41.912Z] Copying: 920/1024 [MB] (26 MBps) [2024-12-06T13:24:43.287Z] Copying: 949/1024 [MB] (28 MBps) [2024-12-06T13:24:44.222Z] Copying: 978/1024 [MB] (29 MBps) [2024-12-06T13:24:44.480Z] Copying: 1008/1024 [MB] (29 MBps) [2024-12-06T13:24:45.411Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-06 13:24:45.326569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.883 [2024-12-06 13:24:45.326661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:38.883 [2024-12-06 13:24:45.326688] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:38.883 [2024-12-06 13:24:45.326704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.883 [2024-12-06 13:24:45.326742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:38.883 [2024-12-06 13:24:45.330825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.883 [2024-12-06 13:24:45.330887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:38.883 [2024-12-06 13:24:45.330906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.057 ms 00:30:38.883 [2024-12-06 13:24:45.330921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.883 [2024-12-06 13:24:45.331215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.883 [2024-12-06 13:24:45.331236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:38.883 [2024-12-06 13:24:45.331251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:30:38.883 [2024-12-06 13:24:45.331264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.883 [2024-12-06 13:24:45.335677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.883 [2024-12-06 13:24:45.335718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:38.883 [2024-12-06 13:24:45.335737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.389 ms 00:30:38.883 [2024-12-06 13:24:45.335759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.883 [2024-12-06 13:24:45.345317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.883 [2024-12-06 13:24:45.345365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:38.883 [2024-12-06 13:24:45.345384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.518 ms 00:30:38.883 [2024-12-06 13:24:45.345398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.883 [2024-12-06 13:24:45.387523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.883 [2024-12-06 13:24:45.387614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:38.883 [2024-12-06 13:24:45.387648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.022 ms 00:30:38.883 [2024-12-06 13:24:45.387663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.883 [2024-12-06 13:24:45.408564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.883 [2024-12-06 13:24:45.408623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:38.883 [2024-12-06 13:24:45.408646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.855 ms 00:30:38.883 [2024-12-06 13:24:45.408661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.883 [2024-12-06 13:24:45.408888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.883 [2024-12-06 13:24:45.408920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:38.883 [2024-12-06 13:24:45.408937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:30:38.883 [2024-12-06 13:24:45.408951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.142 [2024-12-06 13:24:45.446899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:39.142 [2024-12-06 13:24:45.447155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:39.142 [2024-12-06 13:24:45.447192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.922 ms 00:30:39.142 [2024-12-06 13:24:45.447207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.142 [2024-12-06 13:24:45.485565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.142 [2024-12-06 13:24:45.485666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:39.142 [2024-12-06 13:24:45.485691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.312 ms 00:30:39.142 [2024-12-06 13:24:45.485705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.142 [2024-12-06 13:24:45.530189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.142 [2024-12-06 13:24:45.530293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:39.142 [2024-12-06 13:24:45.530330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.415 ms 00:30:39.142 [2024-12-06 13:24:45.530363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.142 [2024-12-06 13:24:45.570584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.142 [2024-12-06 13:24:45.570654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:39.142 [2024-12-06 13:24:45.570678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.050 ms 00:30:39.142 [2024-12-06 13:24:45.570692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.142 [2024-12-06 13:24:45.570732] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:39.142 [2024-12-06 13:24:45.570766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 
[2024-12-06 13:24:45.570971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.570998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:30:39.142 [2024-12-06 13:24:45.571317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:39.142 [2024-12-06 13:24:45.571359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.571995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:39.143 [2024-12-06 13:24:45.572237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:39.143 [2024-12-06 13:24:45.572252] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d953d101-c147-4f86-bca3-652dd3007b5e 00:30:39.143 [2024-12-06 13:24:45.572266] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:39.143 [2024-12-06 13:24:45.572279] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:39.143 [2024-12-06 13:24:45.572292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:39.143 [2024-12-06 13:24:45.572306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:39.143 [2024-12-06 13:24:45.572335] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:39.143 [2024-12-06 13:24:45.572349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:39.143 [2024-12-06 13:24:45.572362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:39.143 [2024-12-06 13:24:45.572374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:39.143 [2024-12-06 13:24:45.572385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:39.143 [2024-12-06 13:24:45.572399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.143 [2024-12-06 13:24:45.572413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:39.143 [2024-12-06 13:24:45.572427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:30:39.143 [2024-12-06 
13:24:45.572445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.143 [2024-12-06 13:24:45.590199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.143 [2024-12-06 13:24:45.590246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:39.143 [2024-12-06 13:24:45.590264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.695 ms 00:30:39.143 [2024-12-06 13:24:45.590275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.143 [2024-12-06 13:24:45.590711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.143 [2024-12-06 13:24:45.590734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:39.144 [2024-12-06 13:24:45.590755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:30:39.144 [2024-12-06 13:24:45.590766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.144 [2024-12-06 13:24:45.634029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.144 [2024-12-06 13:24:45.634092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:39.144 [2024-12-06 13:24:45.634111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.144 [2024-12-06 13:24:45.634123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.144 [2024-12-06 13:24:45.634201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.144 [2024-12-06 13:24:45.634226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:39.144 [2024-12-06 13:24:45.634245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.144 [2024-12-06 13:24:45.634255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.144 [2024-12-06 13:24:45.634349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.144 [2024-12-06 13:24:45.634368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:39.144 [2024-12-06 13:24:45.634380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.144 [2024-12-06 13:24:45.634391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.144 [2024-12-06 13:24:45.634414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.144 [2024-12-06 13:24:45.634429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:39.144 [2024-12-06 13:24:45.634440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.144 [2024-12-06 13:24:45.634457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.738266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.401 [2024-12-06 13:24:45.738329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:39.401 [2024-12-06 13:24:45.738348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.401 [2024-12-06 13:24:45.738359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.823980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.401 [2024-12-06 13:24:45.824047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:39.401 [2024-12-06 13:24:45.824076] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.401 [2024-12-06 13:24:45.824087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.824199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.401 [2024-12-06 13:24:45.824219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:39.401 [2024-12-06 13:24:45.824232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.401 [2024-12-06 13:24:45.824242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.824289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.401 [2024-12-06 13:24:45.824305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:39.401 [2024-12-06 13:24:45.824318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.401 [2024-12-06 13:24:45.824329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.824463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.401 [2024-12-06 13:24:45.824483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:39.401 [2024-12-06 13:24:45.824495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.401 [2024-12-06 13:24:45.824507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.824557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.401 [2024-12-06 13:24:45.824575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:39.401 [2024-12-06 13:24:45.824587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.401 [2024-12-06 13:24:45.824597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.824659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.401 [2024-12-06 13:24:45.824676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:39.401 [2024-12-06 13:24:45.824688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.401 [2024-12-06 13:24:45.824699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.824750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.401 [2024-12-06 13:24:45.824766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:39.401 [2024-12-06 13:24:45.824778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.401 [2024-12-06 13:24:45.824789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.401 [2024-12-06 13:24:45.824956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 498.361 ms, result 0 00:30:40.333 00:30:40.333 00:30:40.333 13:24:46 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:42.861 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:42.861 13:24:48 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:42.861 [2024-12-06 
13:24:49.052175] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:30:42.861 [2024-12-06 13:24:49.052365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80522 ] 00:30:42.861 [2024-12-06 13:24:49.221257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.861 [2024-12-06 13:24:49.321502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.120 [2024-12-06 13:24:49.637585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:43.120 [2024-12-06 13:24:49.637701] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:43.380 [2024-12-06 13:24:49.797379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.380 [2024-12-06 13:24:49.797468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:43.380 [2024-12-06 13:24:49.797506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:43.380 [2024-12-06 13:24:49.797518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.380 [2024-12-06 13:24:49.797586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.380 [2024-12-06 13:24:49.797608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:43.380 [2024-12-06 13:24:49.797621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:43.380 [2024-12-06 13:24:49.797633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.380 [2024-12-06 13:24:49.797674] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:43.380 [2024-12-06 13:24:49.798604] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:43.380 [2024-12-06 13:24:49.798646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.380 [2024-12-06 13:24:49.798660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:43.380 [2024-12-06 13:24:49.798673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:30:43.380 [2024-12-06 13:24:49.798700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.380 [2024-12-06 13:24:49.800050] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:43.380 [2024-12-06 13:24:49.815642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.380 [2024-12-06 13:24:49.815687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:43.380 [2024-12-06 13:24:49.815704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.593 ms 00:30:43.381 [2024-12-06 13:24:49.815717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 [2024-12-06 13:24:49.815799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.381 [2024-12-06 13:24:49.815818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:43.381 [2024-12-06 13:24:49.815831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:43.381 [2024-12-06 13:24:49.815855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 
[2024-12-06 13:24:49.820199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.381 [2024-12-06 13:24:49.820255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:43.381 [2024-12-06 13:24:49.820286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.245 ms 00:30:43.381 [2024-12-06 13:24:49.820321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 [2024-12-06 13:24:49.820414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.381 [2024-12-06 13:24:49.820432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:43.381 [2024-12-06 13:24:49.820444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:43.381 [2024-12-06 13:24:49.820455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 [2024-12-06 13:24:49.820530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.381 [2024-12-06 13:24:49.820547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:43.381 [2024-12-06 13:24:49.820559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:43.381 [2024-12-06 13:24:49.820570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 [2024-12-06 13:24:49.820610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:43.381 [2024-12-06 13:24:49.824748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.381 [2024-12-06 13:24:49.824799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:43.381 [2024-12-06 13:24:49.824851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:30:43.381 [2024-12-06 13:24:49.824863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 [2024-12-06 13:24:49.824929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.381 [2024-12-06 13:24:49.824955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:43.381 [2024-12-06 13:24:49.824967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:43.381 [2024-12-06 13:24:49.824977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 [2024-12-06 13:24:49.825024] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:43.381 [2024-12-06 13:24:49.825073] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:43.381 [2024-12-06 13:24:49.825117] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:43.381 [2024-12-06 13:24:49.825143] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:43.381 [2024-12-06 13:24:49.825264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:43.381 [2024-12-06 13:24:49.825280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:43.381 [2024-12-06 13:24:49.825294] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:43.381 [2024-12-06 13:24:49.825310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base 
device capacity: 103424.00 MiB 00:30:43.381 [2024-12-06 13:24:49.825323] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:43.381 [2024-12-06 13:24:49.825335] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:43.381 [2024-12-06 13:24:49.825345] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:43.381 [2024-12-06 13:24:49.825361] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:43.381 [2024-12-06 13:24:49.825372] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:43.381 [2024-12-06 13:24:49.825384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.381 [2024-12-06 13:24:49.825395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:43.381 [2024-12-06 13:24:49.825407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:30:43.381 [2024-12-06 13:24:49.825417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 [2024-12-06 13:24:49.825516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.381 [2024-12-06 13:24:49.825532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:43.381 [2024-12-06 13:24:49.825543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:43.381 [2024-12-06 13:24:49.825553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.381 [2024-12-06 13:24:49.825675] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:43.381 [2024-12-06 13:24:49.825705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:43.381 [2024-12-06 13:24:49.825720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:43.381 [2024-12-06 13:24:49.825731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.381 [2024-12-06 13:24:49.825743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:43.381 [2024-12-06 13:24:49.825753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:43.381 [2024-12-06 13:24:49.825763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:43.381 [2024-12-06 13:24:49.825774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:43.381 [2024-12-06 13:24:49.825784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:43.381 [2024-12-06 13:24:49.825794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:43.381 [2024-12-06 13:24:49.825804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:43.381 [2024-12-06 13:24:49.825814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:43.381 [2024-12-06 13:24:49.825824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:43.381 [2024-12-06 13:24:49.825939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:43.381 [2024-12-06 13:24:49.825955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:43.381 [2024-12-06 13:24:49.825966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.381 [2024-12-06 13:24:49.825977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:43.381 [2024-12-06 13:24:49.825989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 114.00 MiB 00:30:43.381 [2024-12-06 13:24:49.825999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:43.381 [2024-12-06 13:24:49.826020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:43.381 [2024-12-06 13:24:49.826039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:43.381 [2024-12-06 13:24:49.826050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:43.381 [2024-12-06 13:24:49.826070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:43.381 [2024-12-06 13:24:49.826079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:43.381 [2024-12-06 13:24:49.826099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:43.381 [2024-12-06 13:24:49.826109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:43.381 [2024-12-06 13:24:49.826129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:43.381 [2024-12-06 13:24:49.826139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:43.381 [2024-12-06 13:24:49.826159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:43.381 [2024-12-06 13:24:49.826169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:43.381 [2024-12-06 13:24:49.826179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:43.381 [2024-12-06 13:24:49.826189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:43.381 [2024-12-06 13:24:49.826199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:43.381 [2024-12-06 13:24:49.826208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:43.381 [2024-12-06 13:24:49.826228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:43.381 [2024-12-06 13:24:49.826238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826248] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:43.381 [2024-12-06 13:24:49.826259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:43.381 [2024-12-06 13:24:49.826270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:43.381 [2024-12-06 13:24:49.826280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.381 [2024-12-06 13:24:49.826292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:43.381 [2024-12-06 13:24:49.826302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:43.381 [2024-12-06 13:24:49.826313] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:43.381 [2024-12-06 13:24:49.826324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:43.381 [2024-12-06 13:24:49.826333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:43.381 [2024-12-06 13:24:49.826343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:43.381 [2024-12-06 13:24:49.826355] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:43.381 [2024-12-06 13:24:49.826370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.381 [2024-12-06 13:24:49.826401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:43.381 [2024-12-06 13:24:49.826413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:43.382 [2024-12-06 13:24:49.826424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:43.382 [2024-12-06 13:24:49.826435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:43.382 [2024-12-06 13:24:49.826445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:43.382 [2024-12-06 13:24:49.826457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:43.382 [2024-12-06 13:24:49.826467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:43.382 [2024-12-06 13:24:49.826478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:43.382 [2024-12-06 13:24:49.826489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:43.382 [2024-12-06 13:24:49.826500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:43.382 [2024-12-06 13:24:49.826511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:43.382 [2024-12-06 13:24:49.826521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:43.382 [2024-12-06 13:24:49.826532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:43.382 [2024-12-06 13:24:49.826543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:43.382 [2024-12-06 13:24:49.826554] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:43.382 [2024-12-06 13:24:49.826566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.382 [2024-12-06 13:24:49.826578] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:43.382 [2024-12-06 13:24:49.826589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:43.382 [2024-12-06 13:24:49.826600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:43.382 [2024-12-06 13:24:49.826611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:43.382 [2024-12-06 13:24:49.826624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.382 [2024-12-06 13:24:49.826635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:43.382 [2024-12-06 13:24:49.826647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:30:43.382 [2024-12-06 13:24:49.826657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.382 [2024-12-06 13:24:49.861977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.382 [2024-12-06 13:24:49.862052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:43.382 [2024-12-06 13:24:49.862072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.227 ms 00:30:43.382 [2024-12-06 13:24:49.862091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.382 [2024-12-06 13:24:49.862218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.382 [2024-12-06 13:24:49.862234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:43.382 [2024-12-06 13:24:49.862246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:43.382 [2024-12-06 13:24:49.862258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:49.918401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:49.918467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:43.641 [2024-12-06 13:24:49.918487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.045 ms 00:30:43.641 [2024-12-06 13:24:49.918498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:49.918577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:49.918594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:43.641 [2024-12-06 13:24:49.918614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:43.641 [2024-12-06 13:24:49.918625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:49.919051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:49.919080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:43.641 [2024-12-06 13:24:49.919094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:30:43.641 [2024-12-06 13:24:49.919105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:49.919264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:49.919292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:30:43.641 [2024-12-06 13:24:49.919313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:30:43.641 [2024-12-06 13:24:49.919324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:49.936380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:49.936438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:43.641 [2024-12-06 13:24:49.936457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.025 ms 00:30:43.641 [2024-12-06 13:24:49.936468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:49.952645] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:43.641 [2024-12-06 13:24:49.952703] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:43.641 [2024-12-06 13:24:49.952736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:49.952747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:43.641 [2024-12-06 13:24:49.952759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.110 ms 00:30:43.641 [2024-12-06 13:24:49.952769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:49.982965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:49.983018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:43.641 [2024-12-06 13:24:49.983053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.148 ms 00:30:43.641 [2024-12-06 13:24:49.983066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:49.998977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:49.999019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:43.641 [2024-12-06 13:24:49.999052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.809 ms 00:30:43.641 [2024-12-06 13:24:49.999063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:50.014744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:50.014791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:43.641 [2024-12-06 13:24:50.014808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.635 ms 00:30:43.641 [2024-12-06 13:24:50.014820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:50.015664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:50.015704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:43.641 [2024-12-06 13:24:50.015725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:30:43.641 [2024-12-06 13:24:50.015737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:50.088788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:50.088870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:43.641 
[2024-12-06 13:24:50.088914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.025 ms 00:30:43.641 [2024-12-06 13:24:50.088925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:50.101370] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:43.641 [2024-12-06 13:24:50.103957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:50.103992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:43.641 [2024-12-06 13:24:50.104041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.961 ms 00:30:43.641 [2024-12-06 13:24:50.104052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:50.104198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:50.104218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:43.641 [2024-12-06 13:24:50.104237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:43.641 [2024-12-06 13:24:50.104248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:50.104339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:50.104358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:43.641 [2024-12-06 13:24:50.104371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:43.641 [2024-12-06 13:24:50.104381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.641 [2024-12-06 13:24:50.104412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.641 [2024-12-06 13:24:50.104433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:43.642 [2024-12-06 13:24:50.104446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:43.642 [2024-12-06 13:24:50.104457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.642 [2024-12-06 13:24:50.104505] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:43.642 [2024-12-06 13:24:50.104522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.642 [2024-12-06 13:24:50.104533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:43.642 [2024-12-06 13:24:50.104545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:43.642 [2024-12-06 13:24:50.104556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.642 [2024-12-06 13:24:50.134618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.642 [2024-12-06 13:24:50.134682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:43.642 [2024-12-06 13:24:50.134722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.037 ms 00:30:43.642 [2024-12-06 13:24:50.134735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.642 [2024-12-06 13:24:50.134847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.642 [2024-12-06 13:24:50.134881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:43.642 [2024-12-06 13:24:50.134933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:43.642 [2024-12-06 
13:24:50.134945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.642 [2024-12-06 13:24:50.136129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.209 ms, result 0 00:30:45.020  [2024-12-06T13:24:52.484Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-06T13:25:31.015Z] Copying: 1048108/1048576 [kB] (7852 kBps) [2024-12-06T13:25:31.015Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-06 13:25:30.770042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.487 [2024-12-06 13:25:30.770132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:24.487 [2024-12-06 13:25:30.770175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:24.487 [2024-12-06 13:25:30.770188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.487 [2024-12-06 13:25:30.773758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:24.487 [2024-12-06 13:25:30.779342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.487 [2024-12-06 13:25:30.779399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Unregister IO device 00:31:24.487 [2024-12-06 13:25:30.779428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.521 ms 00:31:24.487 [2024-12-06 13:25:30.779451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.487 [2024-12-06 13:25:30.794415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.487 [2024-12-06 13:25:30.794499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:24.487 [2024-12-06 13:25:30.794532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.162 ms 00:31:24.487 [2024-12-06 13:25:30.794563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.487 [2024-12-06 13:25:30.816636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.487 [2024-12-06 13:25:30.816755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:24.487 [2024-12-06 13:25:30.816780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.039 ms 00:31:24.487 [2024-12-06 13:25:30.816792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.487 [2024-12-06 13:25:30.825008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.487 [2024-12-06 13:25:30.825063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:24.487 [2024-12-06 13:25:30.825083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.139 ms 00:31:24.487 [2024-12-06 13:25:30.825112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.487 [2024-12-06 13:25:30.859171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.487 [2024-12-06 13:25:30.859246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:24.487 [2024-12-06 13:25:30.859267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.993 ms 00:31:24.487 [2024-12-06 13:25:30.859279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.487 [2024-12-06 13:25:30.878516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.488 [2024-12-06 13:25:30.878591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:24.488 [2024-12-06 13:25:30.878612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.160 ms 00:31:24.488 [2024-12-06 13:25:30.878625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.488 [2024-12-06 13:25:30.957823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.488 [2024-12-06 13:25:30.957962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:24.488 [2024-12-06 13:25:30.957985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.119 ms 00:31:24.488 [2024-12-06 13:25:30.957997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.488 [2024-12-06 13:25:30.999829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.488 [2024-12-06 13:25:30.999941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:24.488 [2024-12-06 13:25:30.999971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.804 ms 00:31:24.488 [2024-12-06 13:25:30.999988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.748 [2024-12-06 13:25:31.041417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.748 [2024-12-06 13:25:31.041524] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:24.748 [2024-12-06 13:25:31.041562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.337 ms 00:31:24.748 [2024-12-06 13:25:31.041587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.748 [2024-12-06 13:25:31.081652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.748 [2024-12-06 13:25:31.081730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:24.748 [2024-12-06 13:25:31.081755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.965 ms 00:31:24.748 [2024-12-06 13:25:31.081779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.748 [2024-12-06 13:25:31.118519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.748 [2024-12-06 13:25:31.118616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:24.748 [2024-12-06 13:25:31.118638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.554 ms 00:31:24.748 [2024-12-06 13:25:31.118649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.748 [2024-12-06 13:25:31.118727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:24.748 [2024-12-06 13:25:31.118776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:31:24.748 [2024-12-06 13:25:31.118799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.118995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119348] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:24.748 [2024-12-06 13:25:31.119442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 
13:25:31.119938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.119990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 
00:31:24.749 [2024-12-06 13:25:31.120366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:24.749 [2024-12-06 13:25:31.120555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:24.749 [2024-12-06 13:25:31.120572] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d953d101-c147-4f86-bca3-652dd3007b5e 00:31:24.749 [2024-12-06 13:25:31.120589] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:31:24.749 [2024-12-06 13:25:31.120603] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128704 00:31:24.749 [2024-12-06 13:25:31.120613] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:31:24.749 [2024-12-06 13:25:31.120631] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:31:24.749 [2024-12-06 13:25:31.120678] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:24.749 [2024-12-06 13:25:31.120699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:24.749 [2024-12-06 13:25:31.120735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:24.749 [2024-12-06 13:25:31.120756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:24.749 [2024-12-06 13:25:31.120773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:24.749 [2024-12-06 13:25:31.120791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.749 [2024-12-06 13:25:31.120810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:24.749 [2024-12-06 13:25:31.120833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:31:24.749 [2024-12-06 13:25:31.120879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.749 [2024-12-06 13:25:31.138462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.749 [2024-12-06 13:25:31.138533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:24.749 [2024-12-06 13:25:31.138566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.488 ms 00:31:24.749 [2024-12-06 13:25:31.138578] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.749 [2024-12-06 13:25:31.139141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.749 [2024-12-06 13:25:31.139180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:24.749 [2024-12-06 13:25:31.139196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:31:24.749 [2024-12-06 13:25:31.139208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.749 [2024-12-06 13:25:31.193418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.749 [2024-12-06 13:25:31.193492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:24.749 [2024-12-06 13:25:31.193521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.749 [2024-12-06 13:25:31.193534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.749 [2024-12-06 13:25:31.193617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.749 [2024-12-06 13:25:31.193633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:24.749 [2024-12-06 13:25:31.193645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.749 [2024-12-06 13:25:31.193656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.749 [2024-12-06 13:25:31.193777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.749 [2024-12-06 13:25:31.193804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:24.749 [2024-12-06 13:25:31.193826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.749 [2024-12-06 13:25:31.193853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.749 [2024-12-06 13:25:31.193881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.749 [2024-12-06 13:25:31.193895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:24.749 [2024-12-06 13:25:31.193907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.749 [2024-12-06 13:25:31.193917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.320087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:25.009 [2024-12-06 13:25:31.320173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:25.009 [2024-12-06 13:25:31.320193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:25.009 [2024-12-06 13:25:31.320205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.412397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:25.009 [2024-12-06 13:25:31.412473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:25.009 [2024-12-06 13:25:31.412498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:25.009 [2024-12-06 13:25:31.412520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.412670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:25.009 [2024-12-06 13:25:31.412694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:25.009 [2024-12-06 13:25:31.412707] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:25.009 [2024-12-06 13:25:31.412725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.412785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:25.009 [2024-12-06 13:25:31.412813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:25.009 [2024-12-06 13:25:31.412833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:25.009 [2024-12-06 13:25:31.412870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.413001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:25.009 [2024-12-06 13:25:31.413026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:25.009 [2024-12-06 13:25:31.413039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:25.009 [2024-12-06 13:25:31.413058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.413111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:25.009 [2024-12-06 13:25:31.413129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:25.009 [2024-12-06 13:25:31.413152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:25.009 [2024-12-06 13:25:31.413164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.413207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:25.009 [2024-12-06 13:25:31.413222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:25.009 [2024-12-06 13:25:31.413233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:25.009 [2024-12-06 13:25:31.413245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.413329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:25.009 [2024-12-06 13:25:31.413349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:25.009 [2024-12-06 13:25:31.413362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:25.009 [2024-12-06 13:25:31.413373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.009 [2024-12-06 13:25:31.413534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 647.244 ms, result 0 00:31:26.385 00:31:26.385 00:31:26.385 13:25:32 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:26.643 [2024-12-06 13:25:32.955684] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:31:26.643 [2024-12-06 13:25:32.955828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80947 ] 00:31:26.643 [2024-12-06 13:25:33.132276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.901 [2024-12-06 13:25:33.247987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.160 [2024-12-06 13:25:33.570059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:27.160 [2024-12-06 13:25:33.570143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:27.422 [2024-12-06 13:25:33.730913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.730993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:27.422 [2024-12-06 13:25:33.731015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:27.422 [2024-12-06 13:25:33.731027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.731102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.731122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:27.422 [2024-12-06 13:25:33.731135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:27.422 [2024-12-06 13:25:33.731146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.731178] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:27.422 [2024-12-06 13:25:33.732137] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:27.422 [2024-12-06 13:25:33.732178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.732193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:27.422 [2024-12-06 13:25:33.732206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:31:27.422 [2024-12-06 13:25:33.732217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.733443] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:27.422 [2024-12-06 13:25:33.749862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.749931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:27.422 [2024-12-06 13:25:33.749951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.418 ms 00:31:27.422 [2024-12-06 13:25:33.749964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.750154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.750176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:27.422 [2024-12-06 13:25:33.750189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:27.422 [2024-12-06 13:25:33.750200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.754755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:27.422 [2024-12-06 13:25:33.754815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:27.422 [2024-12-06 13:25:33.754832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.431 ms 00:31:27.422 [2024-12-06 13:25:33.754878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.754987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.755009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:27.422 [2024-12-06 13:25:33.755022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:27.422 [2024-12-06 13:25:33.755034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.755106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.755130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:27.422 [2024-12-06 13:25:33.755143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:27.422 [2024-12-06 13:25:33.755154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.755193] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:27.422 [2024-12-06 13:25:33.759485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.759526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:27.422 [2024-12-06 13:25:33.759546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:31:27.422 [2024-12-06 13:25:33.759557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.759613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.759631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:27.422 [2024-12-06 13:25:33.759643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:27.422 [2024-12-06 13:25:33.759654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.759707] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:27.422 [2024-12-06 13:25:33.759739] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:27.422 [2024-12-06 13:25:33.759782] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:27.422 [2024-12-06 13:25:33.759805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:27.422 [2024-12-06 13:25:33.759942] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:27.422 [2024-12-06 13:25:33.759966] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:27.422 [2024-12-06 13:25:33.759982] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:27.422 [2024-12-06 13:25:33.759996] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:27.422 [2024-12-06 13:25:33.760010] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:27.422 [2024-12-06 13:25:33.760021] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:27.422 [2024-12-06 13:25:33.760032] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:27.422 [2024-12-06 13:25:33.760048] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:27.422 [2024-12-06 13:25:33.760058] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:27.422 [2024-12-06 13:25:33.760071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.760082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:27.422 [2024-12-06 13:25:33.760093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:31:27.422 [2024-12-06 13:25:33.760105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.760206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.422 [2024-12-06 13:25:33.760231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:27.422 [2024-12-06 13:25:33.760244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:27.422 [2024-12-06 13:25:33.760255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.422 [2024-12-06 13:25:33.760416] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:27.422 [2024-12-06 13:25:33.760446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:27.422 [2024-12-06 13:25:33.760460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:27.422 [2024-12-06 13:25:33.760471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.422 [2024-12-06 13:25:33.760483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:27.422 [2024-12-06 13:25:33.760493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:27.422 [2024-12-06 13:25:33.760503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:27.422 [2024-12-06 13:25:33.760514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:27.422 [2024-12-06 13:25:33.760524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:27.422 [2024-12-06 13:25:33.760534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:27.422 [2024-12-06 13:25:33.760544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:27.422 [2024-12-06 13:25:33.760554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:27.422 [2024-12-06 13:25:33.760564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:27.422 [2024-12-06 13:25:33.760587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:27.422 [2024-12-06 13:25:33.760598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:27.422 [2024-12-06 13:25:33.760608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.422 [2024-12-06 13:25:33.760618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:27.422 [2024-12-06 13:25:33.760628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:27.422 [2024-12-06 13:25:33.760638] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.422 [2024-12-06 13:25:33.760648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:27.422 [2024-12-06 13:25:33.760658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:27.422 [2024-12-06 13:25:33.760669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.422 [2024-12-06 13:25:33.760679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:27.422 [2024-12-06 13:25:33.760689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:27.422 [2024-12-06 13:25:33.760699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.423 [2024-12-06 13:25:33.760709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:27.423 [2024-12-06 13:25:33.760720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:27.423 [2024-12-06 13:25:33.760729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.423 [2024-12-06 13:25:33.760739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:27.423 [2024-12-06 13:25:33.760749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:27.423 [2024-12-06 13:25:33.760759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.423 [2024-12-06 13:25:33.760769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:27.423 [2024-12-06 13:25:33.760779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:27.423 [2024-12-06 13:25:33.760788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:27.423 [2024-12-06 13:25:33.760798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:27.423 [2024-12-06 13:25:33.760808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:27.423 [2024-12-06 13:25:33.760818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:27.423 [2024-12-06 13:25:33.760828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:27.423 [2024-12-06 13:25:33.760853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:27.423 [2024-12-06 13:25:33.760866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.423 [2024-12-06 13:25:33.760877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:27.423 [2024-12-06 13:25:33.760887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:27.423 [2024-12-06 13:25:33.760897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.423 [2024-12-06 13:25:33.760907] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:27.423 [2024-12-06 13:25:33.760918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:27.423 [2024-12-06 13:25:33.760929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:27.423 [2024-12-06 13:25:33.760940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.423 [2024-12-06 13:25:33.760952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:27.423 [2024-12-06 13:25:33.760962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:27.423 [2024-12-06 13:25:33.760972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:27.423 
[2024-12-06 13:25:33.760983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:27.423 [2024-12-06 13:25:33.760993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:27.423 [2024-12-06 13:25:33.761003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:27.423 [2024-12-06 13:25:33.761015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:27.423 [2024-12-06 13:25:33.761029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.423 [2024-12-06 13:25:33.761047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:27.423 [2024-12-06 13:25:33.761058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:27.423 [2024-12-06 13:25:33.761069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:27.423 [2024-12-06 13:25:33.761081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:27.423 [2024-12-06 13:25:33.761092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:27.423 [2024-12-06 13:25:33.761103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:27.423 [2024-12-06 13:25:33.761114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:27.423 [2024-12-06 13:25:33.761125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:27.423 [2024-12-06 13:25:33.761136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:27.423 [2024-12-06 13:25:33.761147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:27.423 [2024-12-06 13:25:33.761158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:27.423 [2024-12-06 13:25:33.761168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:27.423 [2024-12-06 13:25:33.761179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:27.423 [2024-12-06 13:25:33.761191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:27.423 [2024-12-06 13:25:33.761201] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:27.423 [2024-12-06 13:25:33.761214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.423 [2024-12-06 13:25:33.761226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:27.423 [2024-12-06 13:25:33.761237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:27.423 [2024-12-06 13:25:33.761249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:27.423 [2024-12-06 13:25:33.761260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:27.423 [2024-12-06 13:25:33.761272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.761285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:27.423 [2024-12-06 13:25:33.761296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:31:27.423 [2024-12-06 13:25:33.761307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.794327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.794399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:27.423 [2024-12-06 13:25:33.794420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.953 ms 00:31:27.423 [2024-12-06 13:25:33.794438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.794557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.794573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:27.423 [2024-12-06 13:25:33.794586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:27.423 [2024-12-06 13:25:33.794596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.841848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.841918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:27.423 [2024-12-06 13:25:33.841940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.142 ms 00:31:27.423 [2024-12-06 13:25:33.841952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.842034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.842051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:27.423 [2024-12-06 13:25:33.842071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:27.423 [2024-12-06 13:25:33.842083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.842487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.842517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:27.423 [2024-12-06 13:25:33.842532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:31:27.423 [2024-12-06 13:25:33.842543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.842704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.842734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:27.423 [2024-12-06 13:25:33.842757] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:31:27.423 [2024-12-06 13:25:33.842768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.859636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.859710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:27.423 [2024-12-06 13:25:33.859731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.836 ms 00:31:27.423 [2024-12-06 13:25:33.859744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.876196] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:27.423 [2024-12-06 13:25:33.876260] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:27.423 [2024-12-06 13:25:33.876281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.876294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:27.423 [2024-12-06 13:25:33.876310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.329 ms 00:31:27.423 [2024-12-06 13:25:33.876321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.906352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.906436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:27.423 [2024-12-06 13:25:33.906457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.904 ms 00:31:27.423 [2024-12-06 13:25:33.906470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.423 [2024-12-06 13:25:33.922801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.423 [2024-12-06 13:25:33.922886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:27.423 [2024-12-06 13:25:33.922907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.237 ms 00:31:27.423 [2024-12-06 13:25:33.922919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.424 [2024-12-06 13:25:33.938760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.424 [2024-12-06 13:25:33.938830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:27.424 [2024-12-06 13:25:33.938860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.683 ms 00:31:27.424 [2024-12-06 13:25:33.938873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.424 [2024-12-06 13:25:33.939737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.424 [2024-12-06 13:25:33.939776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:27.424 [2024-12-06 13:25:33.939797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:31:27.424 [2024-12-06 13:25:33.939808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.683 [2024-12-06 13:25:34.013868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.683 [2024-12-06 13:25:34.013949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:27.683 [2024-12-06 13:25:34.013982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.024 ms 00:31:27.683 [2024-12-06 13:25:34.013995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.683 [2024-12-06 13:25:34.027011] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:27.683 [2024-12-06 13:25:34.029725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.683 [2024-12-06 13:25:34.029770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:27.683 [2024-12-06 13:25:34.029788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.653 ms 00:31:27.683 [2024-12-06 13:25:34.029800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.683 [2024-12-06 13:25:34.029954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.683 [2024-12-06 13:25:34.029976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:27.683 [2024-12-06 13:25:34.029993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:27.683 [2024-12-06 13:25:34.030004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.683 [2024-12-06 13:25:34.031628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.683 [2024-12-06 13:25:34.031674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:27.683 [2024-12-06 13:25:34.031688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms 00:31:27.683 [2024-12-06 13:25:34.031700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.683 [2024-12-06 13:25:34.031739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.683 [2024-12-06 13:25:34.031754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:27.683 [2024-12-06 13:25:34.031767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:27.683 [2024-12-06 13:25:34.031777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.683 [2024-12-06 13:25:34.031827] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:27.683 [2024-12-06 13:25:34.031859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.683 [2024-12-06 13:25:34.031873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:27.683 [2024-12-06 13:25:34.031884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:27.683 [2024-12-06 13:25:34.031895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.683 [2024-12-06 13:25:34.063544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.683 [2024-12-06 13:25:34.063623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:27.683 [2024-12-06 13:25:34.063651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.619 ms 00:31:27.683 [2024-12-06 13:25:34.063664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.683 [2024-12-06 13:25:34.063774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.683 [2024-12-06 13:25:34.063793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:27.683 [2024-12-06 13:25:34.063806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:27.683 [2024-12-06 13:25:34.063817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:27.683 [2024-12-06 13:25:34.066503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.248 ms, result 0 00:31:29.060  [2024-12-06T13:25:36.522Z] Copying: 28/1024 [MB] (28 MBps) [2024-12-06T13:25:37.457Z] Copying: 54/1024 [MB] (26 MBps) [2024-12-06T13:25:38.391Z] Copying: 82/1024 [MB] (27 MBps) [2024-12-06T13:25:39.326Z] Copying: 109/1024 [MB] (27 MBps) [2024-12-06T13:25:40.338Z] Copying: 139/1024 [MB] (29 MBps) [2024-12-06T13:25:41.711Z] Copying: 165/1024 [MB] (26 MBps) [2024-12-06T13:25:42.645Z] Copying: 191/1024 [MB] (26 MBps) [2024-12-06T13:25:43.578Z] Copying: 216/1024 [MB] (25 MBps) [2024-12-06T13:25:44.513Z] Copying: 244/1024 [MB] (27 MBps) [2024-12-06T13:25:45.443Z] Copying: 269/1024 [MB] (25 MBps) [2024-12-06T13:25:46.374Z] Copying: 296/1024 [MB] (27 MBps) [2024-12-06T13:25:47.312Z] Copying: 320/1024 [MB] (23 MBps) [2024-12-06T13:25:48.727Z] Copying: 346/1024 [MB] (26 MBps) [2024-12-06T13:25:49.294Z] Copying: 369/1024 [MB] (22 MBps) [2024-12-06T13:25:50.673Z] Copying: 394/1024 [MB] (25 MBps) [2024-12-06T13:25:51.610Z] Copying: 419/1024 [MB] (25 MBps) [2024-12-06T13:25:52.548Z] Copying: 444/1024 [MB] (24 MBps) [2024-12-06T13:25:53.485Z] Copying: 470/1024 [MB] (25 MBps) [2024-12-06T13:25:54.421Z] Copying: 496/1024 [MB] (25 MBps) [2024-12-06T13:25:55.355Z] Copying: 521/1024 [MB] (25 MBps) [2024-12-06T13:25:56.320Z] Copying: 548/1024 [MB] (26 MBps) [2024-12-06T13:25:57.695Z] Copying: 573/1024 [MB] (25 MBps) [2024-12-06T13:25:58.630Z] Copying: 599/1024 [MB] (25 MBps) [2024-12-06T13:25:59.564Z] Copying: 625/1024 [MB] (26 MBps) [2024-12-06T13:26:00.501Z] Copying: 651/1024 [MB] (26 MBps) [2024-12-06T13:26:01.438Z] Copying: 676/1024 [MB] (24 MBps) [2024-12-06T13:26:02.373Z] Copying: 700/1024 [MB] (24 MBps) [2024-12-06T13:26:03.370Z] Copying: 723/1024 [MB] (22 MBps) [2024-12-06T13:26:04.316Z] Copying: 745/1024 [MB] (22 MBps) [2024-12-06T13:26:05.686Z] Copying: 770/1024 [MB] (24 MBps) [2024-12-06T13:26:06.616Z] Copying: 795/1024 [MB] (25 MBps) [2024-12-06T13:26:07.550Z] Copying: 821/1024 [MB] (26 MBps) [2024-12-06T13:26:08.484Z] Copying: 846/1024 [MB] (25 MBps) [2024-12-06T13:26:09.417Z] Copying: 872/1024 [MB] (25 MBps) [2024-12-06T13:26:10.354Z] Copying: 898/1024 [MB] (25 MBps) [2024-12-06T13:26:11.729Z] Copying: 924/1024 [MB] (26 MBps) [2024-12-06T13:26:12.297Z] Copying: 951/1024 [MB] (26 MBps) [2024-12-06T13:26:13.675Z] Copying: 974/1024 [MB] (23 MBps) [2024-12-06T13:26:14.243Z] Copying: 1001/1024 [MB] (26 MBps) [2024-12-06T13:26:14.501Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-06 13:26:14.280120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.280210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:07.973 [2024-12-06 13:26:14.280242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:07.973 [2024-12-06 13:26:14.280255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.973 [2024-12-06 13:26:14.280291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:07.973 [2024-12-06 13:26:14.283707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.283749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:07.973 [2024-12-06 13:26:14.283766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.389 ms 00:32:07.973 
[2024-12-06 13:26:14.283777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.973 [2024-12-06 13:26:14.284948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.284991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:07.973 [2024-12-06 13:26:14.285008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:32:07.973 [2024-12-06 13:26:14.285028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.973 [2024-12-06 13:26:14.288998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.289041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:07.973 [2024-12-06 13:26:14.289058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.946 ms 00:32:07.973 [2024-12-06 13:26:14.289070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.973 [2024-12-06 13:26:14.295786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.295826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:07.973 [2024-12-06 13:26:14.295851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.674 ms 00:32:07.973 [2024-12-06 13:26:14.295873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.973 [2024-12-06 13:26:14.327225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.327273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:07.973 [2024-12-06 13:26:14.327291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.267 ms 00:32:07.973 [2024-12-06 13:26:14.327302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.973 [2024-12-06 13:26:14.345274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.345321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:07.973 [2024-12-06 13:26:14.345350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.925 ms 00:32:07.973 [2024-12-06 13:26:14.345362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.973 [2024-12-06 13:26:14.450534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.450636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:07.973 [2024-12-06 13:26:14.450661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.117 ms 00:32:07.973 [2024-12-06 13:26:14.450673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.973 [2024-12-06 13:26:14.482670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.973 [2024-12-06 13:26:14.482720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:07.973 [2024-12-06 13:26:14.482740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.973 ms 00:32:07.973 [2024-12-06 13:26:14.482752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.233 [2024-12-06 13:26:14.514101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.233 [2024-12-06 13:26:14.514149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:08.233 [2024-12-06 13:26:14.514168] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.302 ms 00:32:08.233 [2024-12-06 13:26:14.514179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.233 [2024-12-06 13:26:14.544984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.233 [2024-12-06 13:26:14.545027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:08.233 [2024-12-06 13:26:14.545045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.758 ms 00:32:08.233 [2024-12-06 13:26:14.545056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.233 [2024-12-06 13:26:14.576143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.233 [2024-12-06 13:26:14.576194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:08.233 [2024-12-06 13:26:14.576213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.993 ms 00:32:08.233 [2024-12-06 13:26:14.576225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.233 [2024-12-06 13:26:14.576271] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:08.233 [2024-12-06 13:26:14.576295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:08.233 [2024-12-06 13:26:14.576309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576481] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:08.233 [2024-12-06 13:26:14.576677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576771] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.576993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 
13:26:14.577084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:32:08.234 [2024-12-06 13:26:14.577369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:08.234 [2024-12-06 13:26:14.577480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:08.234 [2024-12-06 13:26:14.577492] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d953d101-c147-4f86-bca3-652dd3007b5e 00:32:08.234 [2024-12-06 13:26:14.577504] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:08.234 [2024-12-06 13:26:14.577514] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4288 00:32:08.234 [2024-12-06 13:26:14.577525] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3328 00:32:08.234 [2024-12-06 13:26:14.577537] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.2885 00:32:08.234 [2024-12-06 13:26:14.577556] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:08.234 [2024-12-06 13:26:14.577580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:08.234 [2024-12-06 13:26:14.577591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:08.234 [2024-12-06 13:26:14.577601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:08.234 [2024-12-06 13:26:14.577610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:08.234 [2024-12-06 13:26:14.577622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.234 [2024-12-06 13:26:14.577633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:08.234 [2024-12-06 13:26:14.577645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:32:08.234 [2024-12-06 13:26:14.577655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.234 [2024-12-06 13:26:14.594220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.234 [2024-12-06 13:26:14.594262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:08.234 [2024-12-06 13:26:14.594288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.521 ms 00:32:08.234 [2024-12-06 13:26:14.594300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.234 [2024-12-06 13:26:14.594734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:32:08.234 [2024-12-06 13:26:14.594759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:08.234 [2024-12-06 13:26:14.594773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:32:08.234 [2024-12-06 13:26:14.594785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.234 [2024-12-06 13:26:14.637931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.234 [2024-12-06 13:26:14.638001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:08.234 [2024-12-06 13:26:14.638020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.234 [2024-12-06 13:26:14.638032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.234 [2024-12-06 13:26:14.638110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.235 [2024-12-06 13:26:14.638126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:08.235 [2024-12-06 13:26:14.638138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.235 [2024-12-06 13:26:14.638148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.235 [2024-12-06 13:26:14.638240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.235 [2024-12-06 13:26:14.638259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:08.235 [2024-12-06 13:26:14.638279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.235 [2024-12-06 13:26:14.638290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.235 [2024-12-06 13:26:14.638312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.235 [2024-12-06 13:26:14.638326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:08.235 [2024-12-06 13:26:14.638337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.235 [2024-12-06 13:26:14.638348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.235 [2024-12-06 13:26:14.742092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.235 [2024-12-06 13:26:14.742170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:08.235 [2024-12-06 13:26:14.742189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.235 [2024-12-06 13:26:14.742201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.493 [2024-12-06 13:26:14.826875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.493 [2024-12-06 13:26:14.826933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:08.493 [2024-12-06 13:26:14.826953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.493 [2024-12-06 13:26:14.826964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.493 [2024-12-06 13:26:14.827070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.493 [2024-12-06 13:26:14.827089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:08.493 [2024-12-06 13:26:14.827101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.493 [2024-12-06 13:26:14.827117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.493 
[2024-12-06 13:26:14.827164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.493 [2024-12-06 13:26:14.827180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:08.493 [2024-12-06 13:26:14.827192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.493 [2024-12-06 13:26:14.827203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.493 [2024-12-06 13:26:14.827335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.493 [2024-12-06 13:26:14.827355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:08.493 [2024-12-06 13:26:14.827367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.493 [2024-12-06 13:26:14.827378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.493 [2024-12-06 13:26:14.827455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.493 [2024-12-06 13:26:14.827477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:08.493 [2024-12-06 13:26:14.827490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.493 [2024-12-06 13:26:14.827501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.493 [2024-12-06 13:26:14.827545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.493 [2024-12-06 13:26:14.827561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:08.493 [2024-12-06 13:26:14.827572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.493 [2024-12-06 13:26:14.827584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.493 [2024-12-06 13:26:14.827652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:08.493 [2024-12-06 13:26:14.827671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:08.493 [2024-12-06 13:26:14.827682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:08.493 [2024-12-06 13:26:14.827693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.493 [2024-12-06 13:26:14.827876] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.683 ms, result 0 00:32:09.427 00:32:09.427 00:32:09.428 13:26:15 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:11.959 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:11.959 13:26:17 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:11.959 13:26:17 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:32:11.959 13:26:17 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79459 00:32:11.959 13:26:18 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79459 ']' 00:32:11.959 13:26:18 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79459 00:32:11.959 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79459) - No such process 
00:32:11.959 13:26:18 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79459 is not found' 00:32:11.959 Process with pid 79459 is not found 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:32:11.959 Remove shared memory files 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:11.959 13:26:18 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:32:11.959 00:32:11.959 real 3m11.406s 00:32:11.959 user 2m56.369s 00:32:11.959 sys 0m17.181s 00:32:11.959 13:26:18 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.959 13:26:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:32:11.959 ************************************ 00:32:11.959 END TEST ftl_restore 00:32:11.959 ************************************ 00:32:11.959 13:26:18 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:32:11.959 13:26:18 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:11.959 13:26:18 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.959 13:26:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:11.959 ************************************ 00:32:11.959 START TEST ftl_dirty_shutdown 00:32:11.959 ************************************ 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:32:11.959 * Looking for test storage... 
00:32:11.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:11.959 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:11.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.960 --rc genhtml_branch_coverage=1 00:32:11.960 --rc genhtml_function_coverage=1 00:32:11.960 --rc genhtml_legend=1 00:32:11.960 --rc geninfo_all_blocks=1 00:32:11.960 --rc geninfo_unexecuted_blocks=1 00:32:11.960 00:32:11.960 ' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:11.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.960 --rc genhtml_branch_coverage=1 00:32:11.960 --rc genhtml_function_coverage=1 00:32:11.960 --rc genhtml_legend=1 00:32:11.960 --rc geninfo_all_blocks=1 00:32:11.960 --rc geninfo_unexecuted_blocks=1 00:32:11.960 00:32:11.960 ' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:11.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.960 --rc genhtml_branch_coverage=1 00:32:11.960 --rc genhtml_function_coverage=1 00:32:11.960 --rc genhtml_legend=1 00:32:11.960 --rc geninfo_all_blocks=1 00:32:11.960 --rc geninfo_unexecuted_blocks=1 00:32:11.960 00:32:11.960 ' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:11.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.960 --rc genhtml_branch_coverage=1 00:32:11.960 --rc genhtml_function_coverage=1 00:32:11.960 --rc genhtml_legend=1 00:32:11.960 --rc geninfo_all_blocks=1 00:32:11.960 --rc geninfo_unexecuted_blocks=1 00:32:11.960 00:32:11.960 ' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:32:11.960 13:26:18 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81462 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81462 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81462 ']' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.960 13:26:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:11.960 [2024-12-06 13:26:18.474135] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
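The startup handshake traced above (dirty_shutdown.sh@44-47) reduces to the following; a minimal sketch assuming the default RPC socket /var/tmp/spdk.sock, with the binary path, the -m 0x1 core mask, svcpid, and the waitforlisten helper all taken verbatim from the trace.
# Launch the SPDK target pinned to core 0, then block until its RPC socket answers.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
svcpid=$!
# waitforlisten (autotest_common.sh) polls the UNIX socket before any RPCs are
# issued; the trace shows max_retries=100 for this loop.
waitforlisten "$svcpid"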
00:32:11.960 [2024-12-06 13:26:18.474283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81462 ] 00:32:12.528 [2024-12-06 13:26:18.749182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.528 [2024-12-06 13:26:18.872587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.476 13:26:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.476 13:26:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:13.476 13:26:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:13.476 13:26:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:32:13.476 13:26:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:13.476 13:26:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:32:13.476 13:26:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:13.476 13:26:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:13.743 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:13.743 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:13.743 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:13.743 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:13.743 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:13.743 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:13.743 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:13.744 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:14.002 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:14.002 { 00:32:14.002 "name": "nvme0n1", 00:32:14.002 "aliases": [ 00:32:14.002 "6fee034f-41c6-41fa-9684-017def248cac" 00:32:14.002 ], 00:32:14.002 "product_name": "NVMe disk", 00:32:14.002 "block_size": 4096, 00:32:14.002 "num_blocks": 1310720, 00:32:14.002 "uuid": "6fee034f-41c6-41fa-9684-017def248cac", 00:32:14.002 "numa_id": -1, 00:32:14.002 "assigned_rate_limits": { 00:32:14.002 "rw_ios_per_sec": 0, 00:32:14.002 "rw_mbytes_per_sec": 0, 00:32:14.002 "r_mbytes_per_sec": 0, 00:32:14.002 "w_mbytes_per_sec": 0 00:32:14.002 }, 00:32:14.002 "claimed": true, 00:32:14.002 "claim_type": "read_many_write_one", 00:32:14.002 "zoned": false, 00:32:14.002 "supported_io_types": { 00:32:14.002 "read": true, 00:32:14.002 "write": true, 00:32:14.002 "unmap": true, 00:32:14.002 "flush": true, 00:32:14.002 "reset": true, 00:32:14.002 "nvme_admin": true, 00:32:14.002 "nvme_io": true, 00:32:14.002 "nvme_io_md": false, 00:32:14.002 "write_zeroes": true, 00:32:14.002 "zcopy": false, 00:32:14.002 "get_zone_info": false, 00:32:14.002 "zone_management": false, 00:32:14.002 "zone_append": false, 00:32:14.002 "compare": true, 00:32:14.002 "compare_and_write": false, 00:32:14.002 "abort": true, 00:32:14.002 "seek_hole": false, 00:32:14.002 "seek_data": false, 00:32:14.002 
"copy": true, 00:32:14.002 "nvme_iov_md": false 00:32:14.002 }, 00:32:14.002 "driver_specific": { 00:32:14.002 "nvme": [ 00:32:14.002 { 00:32:14.002 "pci_address": "0000:00:11.0", 00:32:14.002 "trid": { 00:32:14.002 "trtype": "PCIe", 00:32:14.002 "traddr": "0000:00:11.0" 00:32:14.002 }, 00:32:14.002 "ctrlr_data": { 00:32:14.002 "cntlid": 0, 00:32:14.002 "vendor_id": "0x1b36", 00:32:14.002 "model_number": "QEMU NVMe Ctrl", 00:32:14.002 "serial_number": "12341", 00:32:14.003 "firmware_revision": "8.0.0", 00:32:14.003 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:14.003 "oacs": { 00:32:14.003 "security": 0, 00:32:14.003 "format": 1, 00:32:14.003 "firmware": 0, 00:32:14.003 "ns_manage": 1 00:32:14.003 }, 00:32:14.003 "multi_ctrlr": false, 00:32:14.003 "ana_reporting": false 00:32:14.003 }, 00:32:14.003 "vs": { 00:32:14.003 "nvme_version": "1.4" 00:32:14.003 }, 00:32:14.003 "ns_data": { 00:32:14.003 "id": 1, 00:32:14.003 "can_share": false 00:32:14.003 } 00:32:14.003 } 00:32:14.003 ], 00:32:14.003 "mp_policy": "active_passive" 00:32:14.003 } 00:32:14.003 } 00:32:14.003 ]' 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:14.003 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:14.262 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=92c3b395-39ac-49da-b1ed-3e9284f5b174 00:32:14.262 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:14.262 13:26:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 92c3b395-39ac-49da-b1ed-3e9284f5b174 00:32:14.521 13:26:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=461d21bc-1f09-4639-8537-69b3c5c5194b 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 461d21bc-1f09-4639-8537-69b3c5c5194b 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:15.088 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:15.654 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:15.654 { 00:32:15.654 "name": "d8a77832-9e98-467f-8716-330cf66eb0eb", 00:32:15.654 "aliases": [ 00:32:15.654 "lvs/nvme0n1p0" 00:32:15.654 ], 00:32:15.654 "product_name": "Logical Volume", 00:32:15.654 "block_size": 4096, 00:32:15.654 "num_blocks": 26476544, 00:32:15.654 "uuid": "d8a77832-9e98-467f-8716-330cf66eb0eb", 00:32:15.654 "assigned_rate_limits": { 00:32:15.654 "rw_ios_per_sec": 0, 00:32:15.654 "rw_mbytes_per_sec": 0, 00:32:15.654 "r_mbytes_per_sec": 0, 00:32:15.654 "w_mbytes_per_sec": 0 00:32:15.654 }, 00:32:15.654 "claimed": false, 00:32:15.654 "zoned": false, 00:32:15.654 "supported_io_types": { 00:32:15.654 "read": true, 00:32:15.654 "write": true, 00:32:15.654 "unmap": true, 00:32:15.654 "flush": false, 00:32:15.654 "reset": true, 00:32:15.654 "nvme_admin": false, 00:32:15.654 "nvme_io": false, 00:32:15.654 "nvme_io_md": false, 00:32:15.654 "write_zeroes": true, 00:32:15.654 "zcopy": false, 00:32:15.654 "get_zone_info": false, 00:32:15.654 "zone_management": false, 00:32:15.654 "zone_append": false, 00:32:15.654 "compare": false, 00:32:15.654 "compare_and_write": false, 00:32:15.654 "abort": false, 00:32:15.654 "seek_hole": true, 00:32:15.654 "seek_data": true, 00:32:15.654 "copy": false, 00:32:15.655 "nvme_iov_md": false 00:32:15.655 }, 00:32:15.655 "driver_specific": { 00:32:15.655 "lvol": { 00:32:15.655 "lvol_store_uuid": "461d21bc-1f09-4639-8537-69b3c5c5194b", 00:32:15.655 "base_bdev": "nvme0n1", 00:32:15.655 "thin_provision": true, 00:32:15.655 "num_allocated_clusters": 0, 00:32:15.655 "snapshot": false, 00:32:15.655 "clone": false, 00:32:15.655 "esnap_clone": false 00:32:15.655 } 00:32:15.655 } 00:32:15.655 } 00:32:15.655 ]' 00:32:15.655 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:15.655 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:15.655 13:26:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:15.655 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:15.655 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:15.655 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:32:15.655 13:26:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:32:15.655 13:26:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:15.655 13:26:22 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:15.913 13:26:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:15.913 13:26:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:15.913 13:26:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:15.913 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:15.913 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:15.913 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:15.913 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:15.913 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:16.480 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:16.480 { 00:32:16.480 "name": "d8a77832-9e98-467f-8716-330cf66eb0eb", 00:32:16.480 "aliases": [ 00:32:16.480 "lvs/nvme0n1p0" 00:32:16.480 ], 00:32:16.480 "product_name": "Logical Volume", 00:32:16.480 "block_size": 4096, 00:32:16.480 "num_blocks": 26476544, 00:32:16.481 "uuid": "d8a77832-9e98-467f-8716-330cf66eb0eb", 00:32:16.481 "assigned_rate_limits": { 00:32:16.481 "rw_ios_per_sec": 0, 00:32:16.481 "rw_mbytes_per_sec": 0, 00:32:16.481 "r_mbytes_per_sec": 0, 00:32:16.481 "w_mbytes_per_sec": 0 00:32:16.481 }, 00:32:16.481 "claimed": false, 00:32:16.481 "zoned": false, 00:32:16.481 "supported_io_types": { 00:32:16.481 "read": true, 00:32:16.481 "write": true, 00:32:16.481 "unmap": true, 00:32:16.481 "flush": false, 00:32:16.481 "reset": true, 00:32:16.481 "nvme_admin": false, 00:32:16.481 "nvme_io": false, 00:32:16.481 "nvme_io_md": false, 00:32:16.481 "write_zeroes": true, 00:32:16.481 "zcopy": false, 00:32:16.481 "get_zone_info": false, 00:32:16.481 "zone_management": false, 00:32:16.481 "zone_append": false, 00:32:16.481 "compare": false, 00:32:16.481 "compare_and_write": false, 00:32:16.481 "abort": false, 00:32:16.481 "seek_hole": true, 00:32:16.481 "seek_data": true, 00:32:16.481 "copy": false, 00:32:16.481 "nvme_iov_md": false 00:32:16.481 }, 00:32:16.481 "driver_specific": { 00:32:16.481 "lvol": { 00:32:16.481 "lvol_store_uuid": "461d21bc-1f09-4639-8537-69b3c5c5194b", 00:32:16.481 "base_bdev": "nvme0n1", 00:32:16.481 "thin_provision": true, 00:32:16.481 "num_allocated_clusters": 0, 00:32:16.481 "snapshot": false, 00:32:16.481 "clone": false, 00:32:16.481 "esnap_clone": false 00:32:16.481 } 00:32:16.481 } 00:32:16.481 } 00:32:16.481 ]' 00:32:16.481 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:16.481 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:16.481 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:16.481 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:16.481 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:16.481 13:26:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:32:16.481 13:26:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:32:16.481 13:26:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:16.739 13:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:32:16.739 13:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:16.739 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:16.739 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:16.739 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:16.739 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:16.739 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d8a77832-9e98-467f-8716-330cf66eb0eb 00:32:16.998 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:16.998 { 00:32:16.998 "name": "d8a77832-9e98-467f-8716-330cf66eb0eb", 00:32:16.998 "aliases": [ 00:32:16.998 "lvs/nvme0n1p0" 00:32:16.998 ], 00:32:16.998 "product_name": "Logical Volume", 00:32:16.998 "block_size": 4096, 00:32:16.998 "num_blocks": 26476544, 00:32:16.998 "uuid": "d8a77832-9e98-467f-8716-330cf66eb0eb", 00:32:16.998 "assigned_rate_limits": { 00:32:16.998 "rw_ios_per_sec": 0, 00:32:16.998 "rw_mbytes_per_sec": 0, 00:32:16.998 "r_mbytes_per_sec": 0, 00:32:16.998 "w_mbytes_per_sec": 0 00:32:16.998 }, 00:32:16.998 "claimed": false, 00:32:16.998 "zoned": false, 00:32:16.998 "supported_io_types": { 00:32:16.998 "read": true, 00:32:16.998 "write": true, 00:32:16.998 "unmap": true, 00:32:16.998 "flush": false, 00:32:16.998 "reset": true, 00:32:16.998 "nvme_admin": false, 00:32:16.998 "nvme_io": false, 00:32:16.998 "nvme_io_md": false, 00:32:16.998 "write_zeroes": true, 00:32:16.998 "zcopy": false, 00:32:16.998 "get_zone_info": false, 00:32:16.998 "zone_management": false, 00:32:16.998 "zone_append": false, 00:32:16.998 "compare": false, 00:32:16.998 "compare_and_write": false, 00:32:16.998 "abort": false, 00:32:16.998 "seek_hole": true, 00:32:16.998 "seek_data": true, 00:32:16.998 "copy": false, 00:32:16.998 "nvme_iov_md": false 00:32:17.024 }, 00:32:17.024 "driver_specific": { 00:32:17.024 "lvol": { 00:32:17.024 "lvol_store_uuid": "461d21bc-1f09-4639-8537-69b3c5c5194b", 00:32:17.024 "base_bdev": "nvme0n1", 00:32:17.024 "thin_provision": true, 00:32:17.024 "num_allocated_clusters": 0, 00:32:17.024 "snapshot": false, 00:32:17.024 "clone": false, 00:32:17.024 "esnap_clone": false 00:32:17.024 } 00:32:17.024 } 00:32:17.024 } 00:32:17.024 ]' 00:32:17.024 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d8a77832-9e98-467f-8716-330cf66eb0eb 
--l2p_dram_limit 10' 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:32:17.281 13:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:32:17.282 13:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d8a77832-9e98-467f-8716-330cf66eb0eb --l2p_dram_limit 10 -c nvc0n1p0 00:32:17.540 [2024-12-06 13:26:23.869905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.540 [2024-12-06 13:26:23.870010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:17.540 [2024-12-06 13:26:23.870036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:17.540 [2024-12-06 13:26:23.870050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.540 [2024-12-06 13:26:23.870137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.540 [2024-12-06 13:26:23.870156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:17.541 [2024-12-06 13:26:23.870171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:17.541 [2024-12-06 13:26:23.870184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.870225] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:17.541 [2024-12-06 13:26:23.871253] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:17.541 [2024-12-06 13:26:23.871302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.871317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:17.541 [2024-12-06 13:26:23.871332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:32:17.541 [2024-12-06 13:26:23.871345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.871525] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID df65a8bd-07a1-4165-8672-9faf2c9274d0 00:32:17.541 [2024-12-06 13:26:23.872756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.872806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:17.541 [2024-12-06 13:26:23.872824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:17.541 [2024-12-06 13:26:23.872853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.878206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.878306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:17.541 [2024-12-06 13:26:23.878326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.262 ms 00:32:17.541 [2024-12-06 13:26:23.878340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.878515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.878542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:17.541 [2024-12-06 13:26:23.878558] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:32:17.541 [2024-12-06 13:26:23.878578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.878703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.878742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:17.541 [2024-12-06 13:26:23.878761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:17.541 [2024-12-06 13:26:23.878775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.878822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:17.541 [2024-12-06 13:26:23.883770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.883856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:17.541 [2024-12-06 13:26:23.883882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.963 ms 00:32:17.541 [2024-12-06 13:26:23.883895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.883968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.883984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:17.541 [2024-12-06 13:26:23.884000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:17.541 [2024-12-06 13:26:23.884012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.884105] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:17.541 [2024-12-06 13:26:23.884276] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:17.541 [2024-12-06 13:26:23.884313] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:17.541 [2024-12-06 13:26:23.884331] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:17.541 [2024-12-06 13:26:23.884349] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:17.541 [2024-12-06 13:26:23.884373] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:17.541 [2024-12-06 13:26:23.884388] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:17.541 [2024-12-06 13:26:23.884401] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:17.541 [2024-12-06 13:26:23.884419] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:17.541 [2024-12-06 13:26:23.884431] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:17.541 [2024-12-06 13:26:23.884446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.884473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:17.541 [2024-12-06 13:26:23.884489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:32:17.541 [2024-12-06 13:26:23.884501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.884601] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.541 [2024-12-06 13:26:23.884626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:17.541 [2024-12-06 13:26:23.884642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:32:17.541 [2024-12-06 13:26:23.884654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.541 [2024-12-06 13:26:23.884771] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:17.541 [2024-12-06 13:26:23.884787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:17.541 [2024-12-06 13:26:23.884802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:17.541 [2024-12-06 13:26:23.884815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.541 [2024-12-06 13:26:23.884829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:17.541 [2024-12-06 13:26:23.884856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:17.541 [2024-12-06 13:26:23.884873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:17.541 [2024-12-06 13:26:23.884885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:17.541 [2024-12-06 13:26:23.884898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:17.541 [2024-12-06 13:26:23.884909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:17.541 [2024-12-06 13:26:23.884928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:17.541 [2024-12-06 13:26:23.884940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:17.541 [2024-12-06 13:26:23.884955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:17.541 [2024-12-06 13:26:23.884967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:17.541 [2024-12-06 13:26:23.884980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:17.541 [2024-12-06 13:26:23.884990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:17.541 [2024-12-06 13:26:23.885016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:17.541 [2024-12-06 13:26:23.885029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:17.541 [2024-12-06 13:26:23.885053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.541 [2024-12-06 13:26:23.885077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:17.541 [2024-12-06 13:26:23.885088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.541 [2024-12-06 13:26:23.885111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:17.541 [2024-12-06 13:26:23.885124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.541 [2024-12-06 13:26:23.885147] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:17.541 [2024-12-06 13:26:23.885158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.541 [2024-12-06 13:26:23.885181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:17.541 [2024-12-06 13:26:23.885196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:17.541 [2024-12-06 13:26:23.885219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:17.541 [2024-12-06 13:26:23.885230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:17.541 [2024-12-06 13:26:23.885242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:17.541 [2024-12-06 13:26:23.885253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:17.541 [2024-12-06 13:26:23.885268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:17.541 [2024-12-06 13:26:23.885279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:17.541 [2024-12-06 13:26:23.885303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:17.541 [2024-12-06 13:26:23.885316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885326] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:17.541 [2024-12-06 13:26:23.885341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:17.541 [2024-12-06 13:26:23.885352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:17.541 [2024-12-06 13:26:23.885366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.541 [2024-12-06 13:26:23.885378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:17.541 [2024-12-06 13:26:23.885393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:17.541 [2024-12-06 13:26:23.885405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:17.541 [2024-12-06 13:26:23.885420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:17.541 [2024-12-06 13:26:23.885431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:17.541 [2024-12-06 13:26:23.885444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:17.542 [2024-12-06 13:26:23.885458] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:17.542 [2024-12-06 13:26:23.885478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:17.542 [2024-12-06 13:26:23.885491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:17.542 [2024-12-06 13:26:23.885505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:17.542 [2024-12-06 13:26:23.885518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:17.542 [2024-12-06 13:26:23.885531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:17.542 [2024-12-06 13:26:23.885543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:17.542 [2024-12-06 13:26:23.885556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:17.542 [2024-12-06 13:26:23.885568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:17.542 [2024-12-06 13:26:23.885582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:17.542 [2024-12-06 13:26:23.885593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:17.542 [2024-12-06 13:26:23.885611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:17.542 [2024-12-06 13:26:23.885623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:17.542 [2024-12-06 13:26:23.885637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:17.542 [2024-12-06 13:26:23.885648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:17.542 [2024-12-06 13:26:23.885662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:17.542 [2024-12-06 13:26:23.885674] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:17.542 [2024-12-06 13:26:23.885691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:17.542 [2024-12-06 13:26:23.885705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:17.542 [2024-12-06 13:26:23.885719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:17.542 [2024-12-06 13:26:23.885731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:17.542 [2024-12-06 13:26:23.885745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:17.542 [2024-12-06 13:26:23.885759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.542 [2024-12-06 13:26:23.885774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:17.542 [2024-12-06 13:26:23.885786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:32:17.542 [2024-12-06 13:26:23.885800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.542 [2024-12-06 13:26:23.885870] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:32:17.542 [2024-12-06 13:26:23.885900] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:19.439 [2024-12-06 13:26:25.932515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.439 [2024-12-06 13:26:25.932604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:19.439 [2024-12-06 13:26:25.932628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2046.657 ms 00:32:19.439 [2024-12-06 13:26:25.932644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.439 [2024-12-06 13:26:25.965557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.439 [2024-12-06 13:26:25.965625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:19.439 [2024-12-06 13:26:25.965648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.635 ms 00:32:19.439 [2024-12-06 13:26:25.965663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.439 [2024-12-06 13:26:25.965892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.439 [2024-12-06 13:26:25.965929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:19.439 [2024-12-06 13:26:25.965945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:32:19.439 [2024-12-06 13:26:25.965965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.697 [2024-12-06 13:26:26.007422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.697 [2024-12-06 13:26:26.007485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:19.697 [2024-12-06 13:26:26.007506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.395 ms 00:32:19.697 [2024-12-06 13:26:26.007523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.697 [2024-12-06 13:26:26.007580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.697 [2024-12-06 13:26:26.007602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:19.697 [2024-12-06 13:26:26.007633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:19.697 [2024-12-06 13:26:26.007672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.697 [2024-12-06 13:26:26.008113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.697 [2024-12-06 13:26:26.008150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:19.697 [2024-12-06 13:26:26.008166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:32:19.697 [2024-12-06 13:26:26.008180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.697 [2024-12-06 13:26:26.008325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.697 [2024-12-06 13:26:26.008352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:19.697 [2024-12-06 13:26:26.008369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:32:19.697 [2024-12-06 13:26:26.008385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.697 [2024-12-06 13:26:26.026444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.697 [2024-12-06 13:26:26.026511] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:19.697 [2024-12-06 13:26:26.026532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.032 ms 00:32:19.697 [2024-12-06 13:26:26.026547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.697 [2024-12-06 13:26:26.054655] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:19.697 [2024-12-06 13:26:26.057492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.697 [2024-12-06 13:26:26.057534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:19.697 [2024-12-06 13:26:26.057557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.810 ms 00:32:19.697 [2024-12-06 13:26:26.057571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.697 [2024-12-06 13:26:26.115138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.697 [2024-12-06 13:26:26.115222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:19.698 [2024-12-06 13:26:26.115246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.501 ms 00:32:19.698 [2024-12-06 13:26:26.115259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.698 [2024-12-06 13:26:26.115495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.698 [2024-12-06 13:26:26.115518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:19.698 [2024-12-06 13:26:26.115553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:32:19.698 [2024-12-06 13:26:26.115565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.698 [2024-12-06 13:26:26.147572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.698 [2024-12-06 13:26:26.147628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:32:19.698 [2024-12-06 13:26:26.147663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.909 ms 00:32:19.698 [2024-12-06 13:26:26.147685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.698 [2024-12-06 13:26:26.180121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.698 [2024-12-06 13:26:26.180196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:19.698 [2024-12-06 13:26:26.180221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.347 ms 00:32:19.698 [2024-12-06 13:26:26.180234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.698 [2024-12-06 13:26:26.181000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.698 [2024-12-06 13:26:26.181035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:19.698 [2024-12-06 13:26:26.181053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:32:19.698 [2024-12-06 13:26:26.181069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.958 [2024-12-06 13:26:26.265954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.958 [2024-12-06 13:26:26.266033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:19.958 [2024-12-06 13:26:26.266062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.785 ms 00:32:19.958 [2024-12-06 13:26:26.266076] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.958 [2024-12-06 13:26:26.299759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.958 [2024-12-06 13:26:26.299836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:19.958 [2024-12-06 13:26:26.299875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.538 ms 00:32:19.958 [2024-12-06 13:26:26.299889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.958 [2024-12-06 13:26:26.332011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.958 [2024-12-06 13:26:26.332080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:19.958 [2024-12-06 13:26:26.332105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.042 ms 00:32:19.958 [2024-12-06 13:26:26.332119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.958 [2024-12-06 13:26:26.364303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.958 [2024-12-06 13:26:26.364360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:19.958 [2024-12-06 13:26:26.364384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.123 ms 00:32:19.958 [2024-12-06 13:26:26.364397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.958 [2024-12-06 13:26:26.364460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.958 [2024-12-06 13:26:26.364479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:19.958 [2024-12-06 13:26:26.364497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:19.958 [2024-12-06 13:26:26.364509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.958 [2024-12-06 13:26:26.364636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.958 [2024-12-06 13:26:26.364658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:19.958 [2024-12-06 13:26:26.364674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:19.958 [2024-12-06 13:26:26.364685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.958 [2024-12-06 13:26:26.365807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2495.452 ms, result 0 00:32:19.958 { 00:32:19.958 "name": "ftl0", 00:32:19.958 "uuid": "df65a8bd-07a1-4165-8672-9faf2c9274d0" 00:32:19.958 } 00:32:19.958 13:26:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:32:19.958 13:26:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:20.228 13:26:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:32:20.228 13:26:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:32:20.228 13:26:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:32:20.794 /dev/nbd0 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:32:20.794 1+0 records in 00:32:20.794 1+0 records out 00:32:20.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279548 s, 14.7 MB/s 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:32:20.794 13:26:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:32:20.794 [2024-12-06 13:26:27.179917] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:32:20.794 [2024-12-06 13:26:27.180084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81603 ] 00:32:21.053 [2024-12-06 13:26:27.361167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.053 [2024-12-06 13:26:27.486377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.429  [2024-12-06T13:26:29.894Z] Copying: 164/1024 [MB] (164 MBps) [2024-12-06T13:26:31.269Z] Copying: 323/1024 [MB] (158 MBps) [2024-12-06T13:26:31.836Z] Copying: 490/1024 [MB] (166 MBps) [2024-12-06T13:26:33.210Z] Copying: 657/1024 [MB] (167 MBps) [2024-12-06T13:26:34.146Z] Copying: 822/1024 [MB] (165 MBps) [2024-12-06T13:26:34.146Z] Copying: 977/1024 [MB] (154 MBps) [2024-12-06T13:26:35.157Z] Copying: 1024/1024 [MB] (average 162 MBps) 00:32:28.629 00:32:28.629 13:26:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:31.159 13:26:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:32:31.159 [2024-12-06 13:26:37.405796] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
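The two spdk_dd invocations above form the write leg of the dirty-shutdown integrity check; a condensed sketch with commands and sizes taken from the trace (262144 blocks of 4096 bytes is the 1024 MiB the progress lines count up to).
# Stage 1 GiB of random data and checksum it for the later read-back compare.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
# Replay the same bytes onto the FTL bdev through /dev/nbd0, bypassing the page cache.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct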
00:32:31.159 [2024-12-06 13:26:37.406001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81710 ] 00:32:31.159 [2024-12-06 13:26:37.578802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.159 [2024-12-06 13:26:37.680206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.535  [2024-12-06T13:26:39.997Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-06T13:26:41.374Z] Copying: 29/1024 [MB] (15 MBps) [2024-12-06T13:26:42.311Z] Copying: 44/1024 [MB] (14 MBps) [2024-12-06T13:26:43.286Z] Copying: 58/1024 [MB] (14 MBps) [2024-12-06T13:26:44.222Z] Copying: 73/1024 [MB] (14 MBps) [2024-12-06T13:26:45.156Z] Copying: 89/1024 [MB] (15 MBps) [2024-12-06T13:26:46.088Z] Copying: 104/1024 [MB] (15 MBps) [2024-12-06T13:26:47.020Z] Copying: 120/1024 [MB] (15 MBps) [2024-12-06T13:26:48.390Z] Copying: 136/1024 [MB] (15 MBps) [2024-12-06T13:26:49.327Z] Copying: 152/1024 [MB] (15 MBps) [2024-12-06T13:26:50.263Z] Copying: 168/1024 [MB] (15 MBps) [2024-12-06T13:26:51.198Z] Copying: 183/1024 [MB] (15 MBps) [2024-12-06T13:26:52.135Z] Copying: 199/1024 [MB] (16 MBps) [2024-12-06T13:26:53.070Z] Copying: 215/1024 [MB] (15 MBps) [2024-12-06T13:26:54.006Z] Copying: 231/1024 [MB] (15 MBps) [2024-12-06T13:26:55.381Z] Copying: 247/1024 [MB] (15 MBps) [2024-12-06T13:26:56.315Z] Copying: 262/1024 [MB] (15 MBps) [2024-12-06T13:26:57.253Z] Copying: 278/1024 [MB] (15 MBps) [2024-12-06T13:26:58.190Z] Copying: 293/1024 [MB] (15 MBps) [2024-12-06T13:26:59.127Z] Copying: 309/1024 [MB] (15 MBps) [2024-12-06T13:27:00.064Z] Copying: 324/1024 [MB] (15 MBps) [2024-12-06T13:27:00.999Z] Copying: 340/1024 [MB] (15 MBps) [2024-12-06T13:27:02.376Z] Copying: 355/1024 [MB] (15 MBps) [2024-12-06T13:27:03.312Z] Copying: 370/1024 [MB] (15 MBps) [2024-12-06T13:27:04.249Z] Copying: 387/1024 [MB] (16 MBps) [2024-12-06T13:27:05.184Z] Copying: 402/1024 [MB] (15 MBps) [2024-12-06T13:27:06.125Z] Copying: 418/1024 [MB] (15 MBps) [2024-12-06T13:27:07.133Z] Copying: 433/1024 [MB] (15 MBps) [2024-12-06T13:27:08.066Z] Copying: 449/1024 [MB] (15 MBps) [2024-12-06T13:27:08.998Z] Copying: 465/1024 [MB] (16 MBps) [2024-12-06T13:27:10.372Z] Copying: 481/1024 [MB] (15 MBps) [2024-12-06T13:27:11.308Z] Copying: 497/1024 [MB] (16 MBps) [2024-12-06T13:27:12.242Z] Copying: 515/1024 [MB] (17 MBps) [2024-12-06T13:27:13.175Z] Copying: 532/1024 [MB] (16 MBps) [2024-12-06T13:27:14.114Z] Copying: 548/1024 [MB] (16 MBps) [2024-12-06T13:27:15.068Z] Copying: 565/1024 [MB] (16 MBps) [2024-12-06T13:27:16.003Z] Copying: 579/1024 [MB] (14 MBps) [2024-12-06T13:27:17.379Z] Copying: 592/1024 [MB] (12 MBps) [2024-12-06T13:27:18.313Z] Copying: 607/1024 [MB] (14 MBps) [2024-12-06T13:27:19.249Z] Copying: 622/1024 [MB] (15 MBps) [2024-12-06T13:27:20.186Z] Copying: 636/1024 [MB] (13 MBps) [2024-12-06T13:27:21.124Z] Copying: 650/1024 [MB] (14 MBps) [2024-12-06T13:27:22.061Z] Copying: 666/1024 [MB] (15 MBps) [2024-12-06T13:27:22.995Z] Copying: 682/1024 [MB] (15 MBps) [2024-12-06T13:27:24.000Z] Copying: 696/1024 [MB] (14 MBps) [2024-12-06T13:27:25.373Z] Copying: 712/1024 [MB] (16 MBps) [2024-12-06T13:27:26.323Z] Copying: 728/1024 [MB] (15 MBps) [2024-12-06T13:27:27.257Z] Copying: 744/1024 [MB] (16 MBps) [2024-12-06T13:27:28.308Z] Copying: 760/1024 [MB] (15 MBps) [2024-12-06T13:27:29.244Z] Copying: 776/1024 [MB] (16 MBps) 
[2024-12-06T13:27:30.199Z] Copying: 793/1024 [MB] (16 MBps) [2024-12-06T13:27:31.135Z] Copying: 807/1024 [MB] (14 MBps) [2024-12-06T13:27:32.068Z] Copying: 823/1024 [MB] (15 MBps) [2024-12-06T13:27:33.002Z] Copying: 839/1024 [MB] (16 MBps) [2024-12-06T13:27:34.377Z] Copying: 856/1024 [MB] (16 MBps) [2024-12-06T13:27:35.313Z] Copying: 873/1024 [MB] (17 MBps) [2024-12-06T13:27:36.245Z] Copying: 886/1024 [MB] (12 MBps) [2024-12-06T13:27:37.179Z] Copying: 899/1024 [MB] (12 MBps) [2024-12-06T13:27:38.111Z] Copying: 911/1024 [MB] (12 MBps) [2024-12-06T13:27:39.046Z] Copying: 928/1024 [MB] (17 MBps) [2024-12-06T13:27:39.980Z] Copying: 944/1024 [MB] (15 MBps) [2024-12-06T13:27:41.354Z] Copying: 961/1024 [MB] (16 MBps) [2024-12-06T13:27:42.290Z] Copying: 978/1024 [MB] (17 MBps) [2024-12-06T13:27:42.967Z] Copying: 994/1024 [MB] (15 MBps) [2024-12-06T13:27:43.904Z] Copying: 1009/1024 [MB] (15 MBps) [2024-12-06T13:27:45.278Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:33:38.750 00:33:38.750 13:27:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:33:38.750 13:27:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:33:38.750 13:27:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:33:39.318 [2024-12-06 13:27:45.552595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.552900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:39.319 [2024-12-06 13:27:45.552946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:39.319 [2024-12-06 13:27:45.552964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.553018] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:39.319 [2024-12-06 13:27:45.556531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.556701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:39.319 [2024-12-06 13:27:45.556738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:33:39.319 [2024-12-06 13:27:45.556753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.558579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.558635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:39.319 [2024-12-06 13:27:45.558658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:33:39.319 [2024-12-06 13:27:45.558671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.576881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.576934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:39.319 [2024-12-06 13:27:45.576959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.170 ms 00:33:39.319 [2024-12-06 13:27:45.576973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.583836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.583888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:39.319 
[2024-12-06 13:27:45.583910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:33:39.319 [2024-12-06 13:27:45.583923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.616061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.616124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:39.319 [2024-12-06 13:27:45.616150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.015 ms 00:33:39.319 [2024-12-06 13:27:45.616163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.635592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.635800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:39.319 [2024-12-06 13:27:45.635866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.364 ms 00:33:39.319 [2024-12-06 13:27:45.635891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.636112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.636136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:39.319 [2024-12-06 13:27:45.636153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:33:39.319 [2024-12-06 13:27:45.636165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.668470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.668532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:39.319 [2024-12-06 13:27:45.668557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.270 ms 00:33:39.319 [2024-12-06 13:27:45.668570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.700314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.700372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:39.319 [2024-12-06 13:27:45.700397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.675 ms 00:33:39.319 [2024-12-06 13:27:45.700410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.731893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.731962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:39.319 [2024-12-06 13:27:45.731986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.399 ms 00:33:39.319 [2024-12-06 13:27:45.731999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.763691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.319 [2024-12-06 13:27:45.763797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:39.319 [2024-12-06 13:27:45.763827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.531 ms 00:33:39.319 [2024-12-06 13:27:45.763859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.319 [2024-12-06 13:27:45.763950] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:39.319 [2024-12-06 13:27:45.763977] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.763995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 
13:27:45.764344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:39.319 [2024-12-06 13:27:45.764504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:33:39.320 [2024-12-06 13:27:45.764694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.764996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:39.320 [2024-12-06 13:27:45.765435] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:39.320 [2024-12-06 13:27:45.765449] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df65a8bd-07a1-4165-8672-9faf2c9274d0 00:33:39.320 [2024-12-06 13:27:45.765462] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:39.320 [2024-12-06 13:27:45.765478] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:39.320 [2024-12-06 13:27:45.765493] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:39.320 [2024-12-06 13:27:45.765507] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:39.320 [2024-12-06 13:27:45.765518] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:39.320 [2024-12-06 13:27:45.765532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:39.320 [2024-12-06 13:27:45.765544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:39.320 [2024-12-06 13:27:45.765557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:39.320 [2024-12-06 13:27:45.765567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:39.320 [2024-12-06 13:27:45.765581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.320 [2024-12-06 13:27:45.765593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:39.320 [2024-12-06 13:27:45.765608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:33:39.320 [2024-12-06 13:27:45.765620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.320 [2024-12-06 13:27:45.783073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.320 [2024-12-06 13:27:45.783153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:39.320 [2024-12-06 13:27:45.783179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.336 ms 00:33:39.320 [2024-12-06 13:27:45.783192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.320 [2024-12-06 13:27:45.783689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.320 [2024-12-06 13:27:45.783714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:39.320 [2024-12-06 13:27:45.783732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:33:39.320 [2024-12-06 13:27:45.783744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.320 [2024-12-06 13:27:45.839808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.321 [2024-12-06 13:27:45.839895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:39.321 [2024-12-06 13:27:45.839920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.321 [2024-12-06 13:27:45.839932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.321 [2024-12-06 13:27:45.840025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.321 [2024-12-06 13:27:45.840041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:39.321 [2024-12-06 13:27:45.840056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.321 [2024-12-06 13:27:45.840069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:39.321 [2024-12-06 13:27:45.840235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.321 [2024-12-06 13:27:45.840259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:39.321 [2024-12-06 13:27:45.840275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.321 [2024-12-06 13:27:45.840287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.321 [2024-12-06 13:27:45.840320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.321 [2024-12-06 13:27:45.840334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:39.321 [2024-12-06 13:27:45.840347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.321 [2024-12-06 13:27:45.840359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:45.945544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.625 [2024-12-06 13:27:45.945621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:39.625 [2024-12-06 13:27:45.945648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.625 [2024-12-06 13:27:45.945660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:46.031111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.625 [2024-12-06 13:27:46.031185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:39.625 [2024-12-06 13:27:46.031225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.625 [2024-12-06 13:27:46.031238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:46.031379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.625 [2024-12-06 13:27:46.031400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:39.625 [2024-12-06 13:27:46.031419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.625 [2024-12-06 13:27:46.031431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:46.031506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.625 [2024-12-06 13:27:46.031525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:39.625 [2024-12-06 13:27:46.031540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.625 [2024-12-06 13:27:46.031552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:46.031696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.625 [2024-12-06 13:27:46.031717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:39.625 [2024-12-06 13:27:46.031733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.625 [2024-12-06 13:27:46.031748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:46.031814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.625 [2024-12-06 13:27:46.031834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:39.625 [2024-12-06 13:27:46.031882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.625 
[2024-12-06 13:27:46.031909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:46.031963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.625 [2024-12-06 13:27:46.031979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:39.625 [2024-12-06 13:27:46.031994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.625 [2024-12-06 13:27:46.032008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:46.032079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.625 [2024-12-06 13:27:46.032098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:39.625 [2024-12-06 13:27:46.032112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.625 [2024-12-06 13:27:46.032124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.625 [2024-12-06 13:27:46.032299] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 479.657 ms, result 0 00:33:39.625 true 00:33:39.625 13:27:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81462 00:33:39.625 13:27:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81462 00:33:39.625 13:27:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:33:39.625 [2024-12-06 13:27:46.144286] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:33:39.625 [2024-12-06 13:27:46.144611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82377 ] 00:33:39.883 [2024-12-06 13:27:46.316882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.141 [2024-12-06 13:27:46.419982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.514  [2024-12-06T13:27:48.978Z] Copying: 165/1024 [MB] (165 MBps) [2024-12-06T13:27:49.914Z] Copying: 329/1024 [MB] (164 MBps) [2024-12-06T13:27:50.912Z] Copying: 490/1024 [MB] (160 MBps) [2024-12-06T13:27:51.846Z] Copying: 644/1024 [MB] (153 MBps) [2024-12-06T13:27:52.782Z] Copying: 809/1024 [MB] (165 MBps) [2024-12-06T13:27:53.041Z] Copying: 974/1024 [MB] (165 MBps) [2024-12-06T13:27:54.418Z] Copying: 1024/1024 [MB] (average 162 MBps) 00:33:47.890 00:33:47.890 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81462 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:33:47.890 13:27:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:47.890 [2024-12-06 13:27:54.150080] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:33:47.890 [2024-12-06 13:27:54.150471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82461 ] 00:33:47.890 [2024-12-06 13:27:54.331587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.149 [2024-12-06 13:27:54.434951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.407 [2024-12-06 13:27:54.763691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:48.407 [2024-12-06 13:27:54.763777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:48.407 [2024-12-06 13:27:54.831476] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:48.407 [2024-12-06 13:27:54.831773] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:48.407 [2024-12-06 13:27:54.831981] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:48.671 [2024-12-06 13:27:55.078697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.078994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:48.671 [2024-12-06 13:27:55.079031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:48.671 [2024-12-06 13:27:55.079053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.079128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.079148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:48.671 [2024-12-06 13:27:55.079160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:48.671 [2024-12-06 13:27:55.079171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.079204] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:48.671 [2024-12-06 13:27:55.080324] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:48.671 [2024-12-06 13:27:55.080547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.080686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:48.671 [2024-12-06 13:27:55.080742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.348 ms 00:33:48.671 [2024-12-06 13:27:55.080899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.082157] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:48.671 [2024-12-06 13:27:55.098514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.098558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:48.671 [2024-12-06 13:27:55.098593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.359 ms 00:33:48.671 [2024-12-06 13:27:55.098604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.098692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.098711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:33:48.671 [2024-12-06 13:27:55.098739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:48.671 [2024-12-06 13:27:55.098765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.103474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.103518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:48.671 [2024-12-06 13:27:55.103551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.610 ms 00:33:48.671 [2024-12-06 13:27:55.103562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.103654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.103700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:48.671 [2024-12-06 13:27:55.103713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:48.671 [2024-12-06 13:27:55.103724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.103786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.103804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:48.671 [2024-12-06 13:27:55.103816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:48.671 [2024-12-06 13:27:55.103827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.103875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:48.671 [2024-12-06 13:27:55.108341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.108378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:48.671 [2024-12-06 13:27:55.108411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.490 ms 00:33:48.671 [2024-12-06 13:27:55.108422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.108464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.108479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:48.671 [2024-12-06 13:27:55.108491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:48.671 [2024-12-06 13:27:55.108501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.108553] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:48.671 [2024-12-06 13:27:55.108584] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:48.671 [2024-12-06 13:27:55.108625] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:48.671 [2024-12-06 13:27:55.108645] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:48.671 [2024-12-06 13:27:55.108766] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:48.671 [2024-12-06 13:27:55.108781] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:48.671 
[2024-12-06 13:27:55.108794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:48.671 [2024-12-06 13:27:55.108812] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:48.671 [2024-12-06 13:27:55.108824] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:48.671 [2024-12-06 13:27:55.108836] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:48.671 [2024-12-06 13:27:55.108845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:48.671 [2024-12-06 13:27:55.108866] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:48.671 [2024-12-06 13:27:55.108917] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:48.671 [2024-12-06 13:27:55.108929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.108940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:48.671 [2024-12-06 13:27:55.108952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:33:48.671 [2024-12-06 13:27:55.108963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.109080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.671 [2024-12-06 13:27:55.109102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:48.671 [2024-12-06 13:27:55.109116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:33:48.671 [2024-12-06 13:27:55.109127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.671 [2024-12-06 13:27:55.109270] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:48.671 [2024-12-06 13:27:55.109291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:48.671 [2024-12-06 13:27:55.109303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:48.671 [2024-12-06 13:27:55.109314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:48.671 [2024-12-06 13:27:55.109336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:48.671 [2024-12-06 13:27:55.109357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:48.671 [2024-12-06 13:27:55.109367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:48.671 [2024-12-06 13:27:55.109400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:48.671 [2024-12-06 13:27:55.109410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:48.671 [2024-12-06 13:27:55.109420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:48.671 [2024-12-06 13:27:55.109430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:48.671 [2024-12-06 13:27:55.109440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:48.671 [2024-12-06 13:27:55.109452] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:48.671 [2024-12-06 13:27:55.109473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:48.671 [2024-12-06 13:27:55.109482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:48.671 [2024-12-06 13:27:55.109502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:48.671 [2024-12-06 13:27:55.109522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:48.671 [2024-12-06 13:27:55.109532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:48.671 [2024-12-06 13:27:55.109552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:48.671 [2024-12-06 13:27:55.109562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:48.671 [2024-12-06 13:27:55.109581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:48.671 [2024-12-06 13:27:55.109591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:48.671 [2024-12-06 13:27:55.109600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:48.672 [2024-12-06 13:27:55.109610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:48.672 [2024-12-06 13:27:55.109620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:48.672 [2024-12-06 13:27:55.109629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:48.672 [2024-12-06 13:27:55.109639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:48.672 [2024-12-06 13:27:55.109649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:48.672 [2024-12-06 13:27:55.109658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:48.672 [2024-12-06 13:27:55.109668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:48.672 [2024-12-06 13:27:55.109678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:48.672 [2024-12-06 13:27:55.109687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.672 [2024-12-06 13:27:55.109697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:48.672 [2024-12-06 13:27:55.109707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:48.672 [2024-12-06 13:27:55.109717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.672 [2024-12-06 13:27:55.109726] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:48.672 [2024-12-06 13:27:55.109737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:48.672 [2024-12-06 13:27:55.109752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:48.672 [2024-12-06 13:27:55.109763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.672 [2024-12-06 
13:27:55.109775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:48.672 [2024-12-06 13:27:55.109786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:48.672 [2024-12-06 13:27:55.109796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:48.672 [2024-12-06 13:27:55.109806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:48.672 [2024-12-06 13:27:55.109816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:48.672 [2024-12-06 13:27:55.109825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:48.672 [2024-12-06 13:27:55.109837] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:48.672 [2024-12-06 13:27:55.109850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:48.672 [2024-12-06 13:27:55.109862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:48.672 [2024-12-06 13:27:55.109890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:48.672 [2024-12-06 13:27:55.109904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:48.672 [2024-12-06 13:27:55.109914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:48.672 [2024-12-06 13:27:55.109925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:48.672 [2024-12-06 13:27:55.109935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:48.672 [2024-12-06 13:27:55.109946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:48.672 [2024-12-06 13:27:55.109957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:48.672 [2024-12-06 13:27:55.109968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:48.672 [2024-12-06 13:27:55.109979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:48.672 [2024-12-06 13:27:55.109990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:48.672 [2024-12-06 13:27:55.110001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:48.672 [2024-12-06 13:27:55.110012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:48.672 [2024-12-06 13:27:55.110023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:48.672 [2024-12-06 13:27:55.110034] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:33:48.672 [2024-12-06 13:27:55.110046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:48.672 [2024-12-06 13:27:55.110058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:48.672 [2024-12-06 13:27:55.110069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:48.672 [2024-12-06 13:27:55.110081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:48.672 [2024-12-06 13:27:55.110092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:48.672 [2024-12-06 13:27:55.110104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.672 [2024-12-06 13:27:55.110115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:48.672 [2024-12-06 13:27:55.110126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:33:48.672 [2024-12-06 13:27:55.110137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.672 [2024-12-06 13:27:55.143883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.672 [2024-12-06 13:27:55.143950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:48.672 [2024-12-06 13:27:55.143972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.675 ms 00:33:48.672 [2024-12-06 13:27:55.143983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.672 [2024-12-06 13:27:55.144122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.672 [2024-12-06 13:27:55.144139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:48.672 [2024-12-06 13:27:55.144151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:33:48.672 [2024-12-06 13:27:55.144161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.672 [2024-12-06 13:27:55.192388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.672 [2024-12-06 13:27:55.192653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:48.672 [2024-12-06 13:27:55.192694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.128 ms 00:33:48.672 [2024-12-06 13:27:55.192709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.672 [2024-12-06 13:27:55.192788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.672 [2024-12-06 13:27:55.192808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:48.672 [2024-12-06 13:27:55.192821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:48.672 [2024-12-06 13:27:55.192833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.672 [2024-12-06 13:27:55.193285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.672 [2024-12-06 13:27:55.193306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:48.672 [2024-12-06 13:27:55.193319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:33:48.672 [2024-12-06 13:27:55.193339] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.672 [2024-12-06 13:27:55.193497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.672 [2024-12-06 13:27:55.193516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:48.672 [2024-12-06 13:27:55.193528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:33:48.672 [2024-12-06 13:27:55.193540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.210313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.210583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:48.931 [2024-12-06 13:27:55.210616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.742 ms 00:33:48.931 [2024-12-06 13:27:55.210631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.227436] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:48.931 [2024-12-06 13:27:55.227517] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:48.931 [2024-12-06 13:27:55.227540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.227552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:48.931 [2024-12-06 13:27:55.227566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.714 ms 00:33:48.931 [2024-12-06 13:27:55.227577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.258132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.258208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:48.931 [2024-12-06 13:27:55.258229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.482 ms 00:33:48.931 [2024-12-06 13:27:55.258240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.274724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.274792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:48.931 [2024-12-06 13:27:55.274813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.372 ms 00:33:48.931 [2024-12-06 13:27:55.274825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.290787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.290858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:48.931 [2024-12-06 13:27:55.290879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.852 ms 00:33:48.931 [2024-12-06 13:27:55.290890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.291760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.291791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:48.931 [2024-12-06 13:27:55.291805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:33:48.931 [2024-12-06 13:27:55.291817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:48.931 [2024-12-06 13:27:55.365891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.365970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:48.931 [2024-12-06 13:27:55.365992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.027 ms 00:33:48.931 [2024-12-06 13:27:55.366003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.379340] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:48.931 [2024-12-06 13:27:55.382193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.382395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:48.931 [2024-12-06 13:27:55.382428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.106 ms 00:33:48.931 [2024-12-06 13:27:55.382450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.382588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.382610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:48.931 [2024-12-06 13:27:55.382623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:48.931 [2024-12-06 13:27:55.382634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.382728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.382746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:48.931 [2024-12-06 13:27:55.382759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:48.931 [2024-12-06 13:27:55.382770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.382808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.931 [2024-12-06 13:27:55.382823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:48.931 [2024-12-06 13:27:55.382835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:48.931 [2024-12-06 13:27:55.382872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.931 [2024-12-06 13:27:55.382918] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:48.931 [2024-12-06 13:27:55.382935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.932 [2024-12-06 13:27:55.382946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:48.932 [2024-12-06 13:27:55.382958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:48.932 [2024-12-06 13:27:55.382974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.932 [2024-12-06 13:27:55.414503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.932 [2024-12-06 13:27:55.414572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:48.932 [2024-12-06 13:27:55.414594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.497 ms 00:33:48.932 [2024-12-06 13:27:55.414605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.932 [2024-12-06 13:27:55.414711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.932 [2024-12-06 
13:27:55.414730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:48.932 [2024-12-06 13:27:55.414743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:48.932 [2024-12-06 13:27:55.414755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.932 [2024-12-06 13:27:55.416195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.945 ms, result 0 00:33:50.306  [2024-12-06T13:28:34.401Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-06 13:28:34.229675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.873 [2024-12-06 13:28:34.229770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:27.873 [2024-12-06 13:28:34.229795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:27.873 [2024-12-06 13:28:34.229808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.873 [2024-12-06 13:28:34.233566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:27.873 [2024-12-06 13:28:34.240576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:34:27.873 [2024-12-06 13:28:34.240737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:27.873 [2024-12-06 13:28:34.240888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.953 ms 00:34:27.873 [2024-12-06 13:28:34.240924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.873 [2024-12-06 13:28:34.253416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.873 [2024-12-06 13:28:34.253469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:27.873 [2024-12-06 13:28:34.253490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.151 ms 00:34:27.873 [2024-12-06 13:28:34.253503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.873 [2024-12-06 13:28:34.276467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.873 [2024-12-06 13:28:34.276673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:27.873 [2024-12-06 13:28:34.276704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.940 ms 00:34:27.873 [2024-12-06 13:28:34.276719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.873 [2024-12-06 13:28:34.283457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.873 [2024-12-06 13:28:34.283607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:27.873 [2024-12-06 13:28:34.283639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.677 ms 00:34:27.873 [2024-12-06 13:28:34.283652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.873 [2024-12-06 13:28:34.315227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.873 [2024-12-06 13:28:34.315410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:27.873 [2024-12-06 13:28:34.315534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.502 ms 00:34:27.873 [2024-12-06 13:28:34.315583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.873 [2024-12-06 13:28:34.333531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.873 [2024-12-06 13:28:34.333750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:27.873 [2024-12-06 13:28:34.333894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.872 ms 00:34:27.873 [2024-12-06 13:28:34.333947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.133 [2024-12-06 13:28:34.428560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.133 [2024-12-06 13:28:34.428807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:28.133 [2024-12-06 13:28:34.428955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.456 ms 00:34:28.133 [2024-12-06 13:28:34.429006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.133 [2024-12-06 13:28:34.461309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.133 [2024-12-06 13:28:34.461517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:28.133 [2024-12-06 13:28:34.461649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.243 ms 00:34:28.133 [2024-12-06 13:28:34.461717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.133 [2024-12-06 
13:28:34.493055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.133 [2024-12-06 13:28:34.493263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:28.133 [2024-12-06 13:28:34.493388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.259 ms 00:34:28.133 [2024-12-06 13:28:34.493436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.133 [2024-12-06 13:28:34.524263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.133 [2024-12-06 13:28:34.524447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:28.133 [2024-12-06 13:28:34.524578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.746 ms 00:34:28.133 [2024-12-06 13:28:34.524631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.133 [2024-12-06 13:28:34.555687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.133 [2024-12-06 13:28:34.555889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:28.133 [2024-12-06 13:28:34.556008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.920 ms 00:34:28.133 [2024-12-06 13:28:34.556059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.133 [2024-12-06 13:28:34.556207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:28.133 [2024-12-06 13:28:34.556277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:34:28.133 [2024-12-06 13:28:34.556403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.556466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.556583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.556851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.556923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 
0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.557998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558341] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:28.133 [2024-12-06 13:28:34.558352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 
13:28:34.558635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:28.134 [2024-12-06 13:28:34.558766] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:28.134 [2024-12-06 13:28:34.558777] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df65a8bd-07a1-4165-8672-9faf2c9274d0 00:34:28.134 [2024-12-06 13:28:34.558807] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:34:28.134 [2024-12-06 13:28:34.558819] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496 00:34:28.134 [2024-12-06 13:28:34.558829] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:34:28.134 [2024-12-06 13:28:34.558853] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:34:28.134 [2024-12-06 13:28:34.558866] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:28.134 [2024-12-06 13:28:34.558877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:28.134 [2024-12-06 13:28:34.558896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:28.134 [2024-12-06 13:28:34.558911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:28.134 [2024-12-06 13:28:34.558921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:28.134 [2024-12-06 13:28:34.558933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.134 [2024-12-06 13:28:34.558945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:28.134 [2024-12-06 13:28:34.558956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.729 ms 00:34:28.134 [2024-12-06 13:28:34.558967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.134 [2024-12-06 13:28:34.575620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.134 [2024-12-06 13:28:34.575666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:28.134 [2024-12-06 13:28:34.575684] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.575 ms 00:34:28.134 [2024-12-06 13:28:34.575705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.134 [2024-12-06 13:28:34.576166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.134 [2024-12-06 13:28:34.576197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:28.134 [2024-12-06 13:28:34.576230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:34:28.134 [2024-12-06 13:28:34.576242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.134 [2024-12-06 13:28:34.619383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.134 [2024-12-06 13:28:34.619595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:28.134 [2024-12-06 13:28:34.619626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.134 [2024-12-06 13:28:34.619638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.134 [2024-12-06 13:28:34.619736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.134 [2024-12-06 13:28:34.619755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:28.134 [2024-12-06 13:28:34.619775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.134 [2024-12-06 13:28:34.619786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.134 [2024-12-06 13:28:34.619915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.134 [2024-12-06 13:28:34.619936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:28.134 [2024-12-06 13:28:34.619949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.134 [2024-12-06 13:28:34.619960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.134 [2024-12-06 13:28:34.619984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.134 [2024-12-06 13:28:34.620003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:28.134 [2024-12-06 13:28:34.620014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.134 [2024-12-06 13:28:34.620025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.393 [2024-12-06 13:28:34.725182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.393 [2024-12-06 13:28:34.725241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:28.393 [2024-12-06 13:28:34.725261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.393 [2024-12-06 13:28:34.725272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.393 [2024-12-06 13:28:34.809952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.393 [2024-12-06 13:28:34.810021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:28.393 [2024-12-06 13:28:34.810044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.393 [2024-12-06 13:28:34.810063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.393 [2024-12-06 13:28:34.810165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.393 [2024-12-06 13:28:34.810183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:34:28.393 [2024-12-06 13:28:34.810196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.393 [2024-12-06 13:28:34.810207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.393 [2024-12-06 13:28:34.810256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.393 [2024-12-06 13:28:34.810271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:28.393 [2024-12-06 13:28:34.810282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.393 [2024-12-06 13:28:34.810293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.393 [2024-12-06 13:28:34.810427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.393 [2024-12-06 13:28:34.810448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:28.393 [2024-12-06 13:28:34.810460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.393 [2024-12-06 13:28:34.810471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.393 [2024-12-06 13:28:34.810521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.393 [2024-12-06 13:28:34.810538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:28.393 [2024-12-06 13:28:34.810550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.393 [2024-12-06 13:28:34.810560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.394 [2024-12-06 13:28:34.810610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.394 [2024-12-06 13:28:34.810626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:28.394 [2024-12-06 13:28:34.810637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.394 [2024-12-06 13:28:34.810648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.394 [2024-12-06 13:28:34.810697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:28.394 [2024-12-06 13:28:34.810713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:28.394 [2024-12-06 13:28:34.810724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:28.394 [2024-12-06 13:28:34.810734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.394 [2024-12-06 13:28:34.810945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 583.462 ms, result 0 00:34:29.769 00:34:29.770 00:34:29.770 13:28:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:34:32.302 13:28:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:32.302 [2024-12-06 13:28:38.658067] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:34:32.302 [2024-12-06 13:28:38.658242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82885 ] 00:34:32.561 [2024-12-06 13:28:38.842385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.561 [2024-12-06 13:28:38.970250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.819 [2024-12-06 13:28:39.323445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:32.819 [2024-12-06 13:28:39.323527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:33.078 [2024-12-06 13:28:39.486352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.078 [2024-12-06 13:28:39.486422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:33.078 [2024-12-06 13:28:39.486453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:33.078 [2024-12-06 13:28:39.486466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.078 [2024-12-06 13:28:39.486533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.078 [2024-12-06 13:28:39.486555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:33.078 [2024-12-06 13:28:39.486569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:34:33.078 [2024-12-06 13:28:39.486581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.078 [2024-12-06 13:28:39.486614] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:33.078 [2024-12-06 13:28:39.487648] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:33.078 [2024-12-06 13:28:39.487690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.078 [2024-12-06 13:28:39.487714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:33.078 [2024-12-06 13:28:39.487729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:34:33.078 [2024-12-06 13:28:39.487741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.078 [2024-12-06 13:28:39.488969] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:33.078 [2024-12-06 13:28:39.506276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.078 [2024-12-06 13:28:39.506325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:33.078 [2024-12-06 13:28:39.506345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.308 ms 00:34:33.079 [2024-12-06 13:28:39.506373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.506507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.079 [2024-12-06 13:28:39.506526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:33.079 [2024-12-06 13:28:39.506539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:34:33.079 [2024-12-06 13:28:39.506550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.511209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:33.079 [2024-12-06 13:28:39.511287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:33.079 [2024-12-06 13:28:39.511319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.530 ms 00:34:33.079 [2024-12-06 13:28:39.511338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.511456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.079 [2024-12-06 13:28:39.511475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:33.079 [2024-12-06 13:28:39.511489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:34:33.079 [2024-12-06 13:28:39.511501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.511569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.079 [2024-12-06 13:28:39.511587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:33.079 [2024-12-06 13:28:39.511600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:33.079 [2024-12-06 13:28:39.511613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.511654] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:33.079 [2024-12-06 13:28:39.516237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.079 [2024-12-06 13:28:39.516275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:33.079 [2024-12-06 13:28:39.516312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.594 ms 00:34:33.079 [2024-12-06 13:28:39.516340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.516391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.079 [2024-12-06 13:28:39.516409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:33.079 [2024-12-06 13:28:39.516422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:33.079 [2024-12-06 13:28:39.516434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.516484] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:33.079 [2024-12-06 13:28:39.516517] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:33.079 [2024-12-06 13:28:39.516561] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:33.079 [2024-12-06 13:28:39.516586] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:33.079 [2024-12-06 13:28:39.516698] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:33.079 [2024-12-06 13:28:39.516713] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:33.079 [2024-12-06 13:28:39.516760] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:33.079 [2024-12-06 13:28:39.516791] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:33.079 [2024-12-06 13:28:39.516805] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:33.079 [2024-12-06 13:28:39.516818] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:33.079 [2024-12-06 13:28:39.516830] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:33.079 [2024-12-06 13:28:39.516846] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:33.079 [2024-12-06 13:28:39.516858] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:33.079 [2024-12-06 13:28:39.516870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.079 [2024-12-06 13:28:39.516882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:33.079 [2024-12-06 13:28:39.516923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:34:33.079 [2024-12-06 13:28:39.516936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.517062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.079 [2024-12-06 13:28:39.517080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:33.079 [2024-12-06 13:28:39.517093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:34:33.079 [2024-12-06 13:28:39.517105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.079 [2024-12-06 13:28:39.517230] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:33.079 [2024-12-06 13:28:39.517267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:33.079 [2024-12-06 13:28:39.517295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:33.079 [2024-12-06 13:28:39.517307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:33.079 [2024-12-06 13:28:39.517346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:33.079 [2024-12-06 13:28:39.517370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:33.079 [2024-12-06 13:28:39.517382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:33.079 [2024-12-06 13:28:39.517404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:33.079 [2024-12-06 13:28:39.517415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:33.079 [2024-12-06 13:28:39.517426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:33.079 [2024-12-06 13:28:39.517451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:33.079 [2024-12-06 13:28:39.517463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:33.079 [2024-12-06 13:28:39.517474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:33.079 [2024-12-06 13:28:39.517496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:33.079 [2024-12-06 13:28:39.517512] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:33.079 [2024-12-06 13:28:39.517535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:33.079 [2024-12-06 13:28:39.517558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:33.079 [2024-12-06 13:28:39.517569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:33.079 [2024-12-06 13:28:39.517592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:33.079 [2024-12-06 13:28:39.517603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:33.079 [2024-12-06 13:28:39.517626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:33.079 [2024-12-06 13:28:39.517637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:33.079 [2024-12-06 13:28:39.517660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:33.079 [2024-12-06 13:28:39.517671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:33.079 [2024-12-06 13:28:39.517693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:33.079 [2024-12-06 13:28:39.517704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:33.079 [2024-12-06 13:28:39.517715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:33.079 [2024-12-06 13:28:39.517727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:33.079 [2024-12-06 13:28:39.517738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:33.079 [2024-12-06 13:28:39.517749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:33.079 [2024-12-06 13:28:39.517771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:33.079 [2024-12-06 13:28:39.517784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517796] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:33.079 [2024-12-06 13:28:39.517808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:33.079 [2024-12-06 13:28:39.517820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:33.079 [2024-12-06 13:28:39.517831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:33.079 [2024-12-06 13:28:39.517844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:33.079 [2024-12-06 13:28:39.517855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:33.079 [2024-12-06 13:28:39.517866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:33.079 
[2024-12-06 13:28:39.517878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:33.079 [2024-12-06 13:28:39.518259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:33.079 [2024-12-06 13:28:39.518306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:33.079 [2024-12-06 13:28:39.518347] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:33.079 [2024-12-06 13:28:39.518543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:33.080 [2024-12-06 13:28:39.518760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:33.080 [2024-12-06 13:28:39.518823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:33.080 [2024-12-06 13:28:39.518983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:33.080 [2024-12-06 13:28:39.519041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:33.080 [2024-12-06 13:28:39.519172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:33.080 [2024-12-06 13:28:39.519384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:33.080 [2024-12-06 13:28:39.519445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:33.080 [2024-12-06 13:28:39.519567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:33.080 [2024-12-06 13:28:39.519634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:33.080 [2024-12-06 13:28:39.519689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:33.080 [2024-12-06 13:28:39.519831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:33.080 [2024-12-06 13:28:39.519912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:33.080 [2024-12-06 13:28:39.519967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:33.080 [2024-12-06 13:28:39.520074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:33.080 [2024-12-06 13:28:39.520088] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:33.080 [2024-12-06 13:28:39.520102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:33.080 [2024-12-06 13:28:39.520116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:33.080 [2024-12-06 13:28:39.520128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:33.080 [2024-12-06 13:28:39.520140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:33.080 [2024-12-06 13:28:39.520152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:33.080 [2024-12-06 13:28:39.520167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.080 [2024-12-06 13:28:39.520180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:33.080 [2024-12-06 13:28:39.520193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.007 ms 00:34:33.080 [2024-12-06 13:28:39.520205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.080 [2024-12-06 13:28:39.555243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.080 [2024-12-06 13:28:39.555340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:33.080 [2024-12-06 13:28:39.555360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.948 ms 00:34:33.080 [2024-12-06 13:28:39.555378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.080 [2024-12-06 13:28:39.555508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.080 [2024-12-06 13:28:39.555524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:33.080 [2024-12-06 13:28:39.555538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:34:33.080 [2024-12-06 13:28:39.555550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.338 [2024-12-06 13:28:39.614815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.338 [2024-12-06 13:28:39.614894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:33.338 [2024-12-06 13:28:39.614919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.149 ms 00:34:33.339 [2024-12-06 13:28:39.614933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.615012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.615029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:33.339 [2024-12-06 13:28:39.615050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:33.339 [2024-12-06 13:28:39.615063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.615458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.615484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:33.339 [2024-12-06 13:28:39.615499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:34:33.339 [2024-12-06 13:28:39.615512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.615670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.615690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:33.339 [2024-12-06 13:28:39.615723] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:34:33.339 [2024-12-06 13:28:39.615736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.632527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.632733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:33.339 [2024-12-06 13:28:39.632764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.762 ms 00:34:33.339 [2024-12-06 13:28:39.632779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.650043] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:33.339 [2024-12-06 13:28:39.650207] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:33.339 [2024-12-06 13:28:39.650233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.650248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:33.339 [2024-12-06 13:28:39.650264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.255 ms 00:34:33.339 [2024-12-06 13:28:39.650276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.682575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.682618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:33.339 [2024-12-06 13:28:39.682637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.233 ms 00:34:33.339 [2024-12-06 13:28:39.682649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.699499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.699547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:33.339 [2024-12-06 13:28:39.699581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.803 ms 00:34:33.339 [2024-12-06 13:28:39.699611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.716526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.716683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:33.339 [2024-12-06 13:28:39.716713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.866 ms 00:34:33.339 [2024-12-06 13:28:39.716726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.717572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.717605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:33.339 [2024-12-06 13:28:39.717626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:34:33.339 [2024-12-06 13:28:39.717639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.794518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.794595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:33.339 [2024-12-06 13:28:39.794626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.846 ms 00:34:33.339 [2024-12-06 13:28:39.794639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.807877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:33.339 [2024-12-06 13:28:39.810724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.810764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:33.339 [2024-12-06 13:28:39.810784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.008 ms 00:34:33.339 [2024-12-06 13:28:39.810797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.810934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.810956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:33.339 [2024-12-06 13:28:39.810974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:33.339 [2024-12-06 13:28:39.810987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.812658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.812696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:33.339 [2024-12-06 13:28:39.812713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:34:33.339 [2024-12-06 13:28:39.812725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.812765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.812782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:33.339 [2024-12-06 13:28:39.812795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:33.339 [2024-12-06 13:28:39.812808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.812871] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:33.339 [2024-12-06 13:28:39.812890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.812902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:33.339 [2024-12-06 13:28:39.812915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:33.339 [2024-12-06 13:28:39.812926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.845317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.845508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:33.339 [2024-12-06 13:28:39.845562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.360 ms 00:34:33.339 [2024-12-06 13:28:39.845577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:33.339 [2024-12-06 13:28:39.845663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:33.339 [2024-12-06 13:28:39.845682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:33.339 [2024-12-06 13:28:39.845695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:33.339 [2024-12-06 13:28:39.845707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
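Each management step above is logged by trace_step as an Action / name / duration / status quadruple, and the finish_msg entry that follows reports the total for the whole 'FTL startup' process. To total the per-step durations independently from a saved copy of this output, a one-liner is enough (a minimal sketch; ftl.log is an illustrative filename, not an artifact of this job):

  # Sum every "duration: N ms" printed by trace_step in the saved log.
  grep -o 'duration: [0-9.]* ms' ftl.log \
    | awk '{total += $2} END {printf "steps total: %.3f ms\n", total}'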
00:34:33.339 [2024-12-06 13:28:39.846950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.067 ms, result 0
00:34:34.714 [2024-12-06T13:28:42.237Z] Copying: 792/1048576 [kB] (792 kBps)
[2024-12-06T13:29:19.470Z] Copying: 1024/1024 [MB] (average 26 MBps)
[2024-12-06 13:29:19.385120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.942 [2024-12-06 13:29:19.385481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:12.942 [2024-12-06 13:29:19.385520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:12.942 [2024-12-06 13:29:19.385539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.942 [2024-12-06 13:29:19.385585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:12.942 [2024-12-06 13:29:19.389713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.942 [2024-12-06 13:29:19.389756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:12.942 [2024-12-06 13:29:19.389787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.098 ms 00:35:12.942 [2024-12-06 13:29:19.389801] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.942 [2024-12-06 13:29:19.390129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.942 [2024-12-06 13:29:19.390168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:12.942 [2024-12-06 13:29:19.390185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:35:12.942 [2024-12-06 13:29:19.390201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.942 [2024-12-06 13:29:19.402914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.942 [2024-12-06 13:29:19.402972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:12.942 [2024-12-06 13:29:19.402995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.684 ms 00:35:12.942 [2024-12-06 13:29:19.403011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.942 [2024-12-06 13:29:19.412552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.942 [2024-12-06 13:29:19.412604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:12.942 [2024-12-06 13:29:19.412636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.491 ms 00:35:12.942 [2024-12-06 13:29:19.412651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.942 [2024-12-06 13:29:19.452682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.942 [2024-12-06 13:29:19.452741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:12.942 [2024-12-06 13:29:19.452764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.935 ms 00:35:12.942 [2024-12-06 13:29:19.452779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.203 [2024-12-06 13:29:19.474030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.203 [2024-12-06 13:29:19.474274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:13.203 [2024-12-06 13:29:19.474330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.193 ms 00:35:13.203 [2024-12-06 13:29:19.474348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.203 [2024-12-06 13:29:19.476114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.203 [2024-12-06 13:29:19.476171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:13.203 [2024-12-06 13:29:19.476193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:35:13.203 [2024-12-06 13:29:19.476219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.203 [2024-12-06 13:29:19.515122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.203 [2024-12-06 13:29:19.515183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:13.203 [2024-12-06 13:29:19.515216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.874 ms 00:35:13.203 [2024-12-06 13:29:19.515231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.203 [2024-12-06 13:29:19.553596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.203 [2024-12-06 13:29:19.553816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:13.203 [2024-12-06 13:29:19.553877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.305 ms 
00:35:13.203 [2024-12-06 13:29:19.553895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.203 [2024-12-06 13:29:19.590624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.203 [2024-12-06 13:29:19.590707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:13.203 [2024-12-06 13:29:19.590741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.647 ms 00:35:13.203 [2024-12-06 13:29:19.590764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.203 [2024-12-06 13:29:19.623380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.203 [2024-12-06 13:29:19.623427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:13.203 [2024-12-06 13:29:19.623478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.455 ms 00:35:13.203 [2024-12-06 13:29:19.623507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.203 [2024-12-06 13:29:19.623557] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:13.203 [2024-12-06 13:29:19.623591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:13.203 [2024-12-06 13:29:19.623612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:35:13.203 [2024-12-06 13:29:19.623625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:35:13.203 [2024-12-06 13:29:19.623875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.623999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:13.203 [2024-12-06 13:29:19.624278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624824] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:13.204 [2024-12-06 13:29:19.624956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:13.204 [2024-12-06 13:29:19.624969] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df65a8bd-07a1-4165-8672-9faf2c9274d0 00:35:13.204 [2024-12-06 13:29:19.624982] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:35:13.204 [2024-12-06 13:29:19.624993] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104 00:35:13.204 [2024-12-06 13:29:19.625010] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120 00:35:13.204 [2024-12-06 13:29:19.625024] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:35:13.204 [2024-12-06 13:29:19.625035] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:13.204 [2024-12-06 13:29:19.625067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:13.204 [2024-12-06 13:29:19.625079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:13.204 [2024-12-06 13:29:19.625090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:13.204 [2024-12-06 13:29:19.625100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:13.204 [2024-12-06 13:29:19.625112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.204 [2024-12-06 13:29:19.625123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:13.204 [2024-12-06 13:29:19.625136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:35:13.204 [2024-12-06 13:29:19.625148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.204 [2024-12-06 13:29:19.640953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.204 [2024-12-06 13:29:19.641149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:13.204 [2024-12-06 13:29:19.641273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.742 ms 00:35:13.204 [2024-12-06 13:29:19.641327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.204 [2024-12-06 13:29:19.641791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:13.204 [2024-12-06 13:29:19.641931] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:13.204 [2024-12-06 13:29:19.642043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:35:13.204 [2024-12-06 13:29:19.642159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.204 [2024-12-06 13:29:19.687043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.204 [2024-12-06 13:29:19.687257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:13.204 [2024-12-06 13:29:19.687418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.204 [2024-12-06 13:29:19.687441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.204 [2024-12-06 13:29:19.687518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.204 [2024-12-06 13:29:19.687533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:13.204 [2024-12-06 13:29:19.687546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.204 [2024-12-06 13:29:19.687558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.204 [2024-12-06 13:29:19.687659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.204 [2024-12-06 13:29:19.687694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:13.204 [2024-12-06 13:29:19.687707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.204 [2024-12-06 13:29:19.687718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.204 [2024-12-06 13:29:19.687778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.204 [2024-12-06 13:29:19.687793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:13.204 [2024-12-06 13:29:19.687806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.204 [2024-12-06 13:29:19.687818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.784897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.464 [2024-12-06 13:29:19.784990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:13.464 [2024-12-06 13:29:19.785027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.464 [2024-12-06 13:29:19.785039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.863405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.464 [2024-12-06 13:29:19.863469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:13.464 [2024-12-06 13:29:19.863507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.464 [2024-12-06 13:29:19.863518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.863639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.464 [2024-12-06 13:29:19.863661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:13.464 [2024-12-06 13:29:19.863673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.464 [2024-12-06 13:29:19.863684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.863755] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.464 [2024-12-06 13:29:19.863787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:13.464 [2024-12-06 13:29:19.863799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.464 [2024-12-06 13:29:19.863810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.863971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.464 [2024-12-06 13:29:19.863992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:13.464 [2024-12-06 13:29:19.864013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.464 [2024-12-06 13:29:19.864025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.864112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.464 [2024-12-06 13:29:19.864132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:13.464 [2024-12-06 13:29:19.864145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.464 [2024-12-06 13:29:19.864156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.864203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.464 [2024-12-06 13:29:19.864219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:13.464 [2024-12-06 13:29:19.864239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.464 [2024-12-06 13:29:19.864251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.864315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:13.464 [2024-12-06 13:29:19.864332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:13.464 [2024-12-06 13:29:19.864360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:13.464 [2024-12-06 13:29:19.864372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:13.464 [2024-12-06 13:29:19.864515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 479.385 ms, result 0 00:35:14.408 00:35:14.408 00:35:14.408 13:29:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:16.939 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:16.939 13:29:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:16.939 [2024-12-06 13:29:23.014497] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
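The statistics dumped during the shutdown above are internally consistent: total valid LBAs 262656 = 261120 (Band 1, closed) + 1536 (Band 2, open), and the reported write-amplification factor follows directly from the counters, WAF = total writes / user writes = 135104 / 133120 ≈ 1.0149. The md5sum check at dirty_shutdown.sh@94 and the spdk_dd invocation at @95 form the test's core read-back-and-verify step: data written earlier is read back from the recovered ftl0 bdev and compared against a previously recorded checksum. A minimal sketch of that step, using only flags visible in the invocation above (the testfile2.md5 filename is illustrative):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # Read 262144 blocks from the ftl0 bdev, skipping the first 262144.
  "$DD" --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --count=262144 --skip=262144 --json="$CFG"
  # Verify against the checksum recorded before the shutdown.
  md5sum -c testfile2.md5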
00:35:16.940 [2024-12-06 13:29:23.014665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83320 ] 00:35:16.940 [2024-12-06 13:29:23.200683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.940 [2024-12-06 13:29:23.325423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.197 [2024-12-06 13:29:23.632195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:17.197 [2024-12-06 13:29:23.632483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:17.456 [2024-12-06 13:29:23.792877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.456 [2024-12-06 13:29:23.792939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:17.456 [2024-12-06 13:29:23.792957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:35:17.456 [2024-12-06 13:29:23.792968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.456 [2024-12-06 13:29:23.793030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.456 [2024-12-06 13:29:23.793051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:17.456 [2024-12-06 13:29:23.793063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:35:17.456 [2024-12-06 13:29:23.793073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.456 [2024-12-06 13:29:23.793101] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:17.456 [2024-12-06 13:29:23.794116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:17.456 [2024-12-06 13:29:23.794152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.456 [2024-12-06 13:29:23.794167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:17.456 [2024-12-06 13:29:23.794180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:35:17.456 [2024-12-06 13:29:23.794191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.456 [2024-12-06 13:29:23.795448] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:17.456 [2024-12-06 13:29:23.811444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.456 [2024-12-06 13:29:23.811501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:17.456 [2024-12-06 13:29:23.811534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.996 ms 00:35:17.456 [2024-12-06 13:29:23.811545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.456 [2024-12-06 13:29:23.811658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.456 [2024-12-06 13:29:23.811681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:17.456 [2024-12-06 13:29:23.811695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:17.456 [2024-12-06 13:29:23.811706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.456 [2024-12-06 13:29:23.816450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:17.456 [2024-12-06 13:29:23.816677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:17.456 [2024-12-06 13:29:23.816708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:35:17.456 [2024-12-06 13:29:23.816729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.456 [2024-12-06 13:29:23.816830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.456 [2024-12-06 13:29:23.816888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:17.456 [2024-12-06 13:29:23.816903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:35:17.456 [2024-12-06 13:29:23.816914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.456 [2024-12-06 13:29:23.816982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.456 [2024-12-06 13:29:23.817000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:17.456 [2024-12-06 13:29:23.817013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:17.456 [2024-12-06 13:29:23.817024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.456 [2024-12-06 13:29:23.817065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:17.456 [2024-12-06 13:29:23.821521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.456 [2024-12-06 13:29:23.821576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:17.456 [2024-12-06 13:29:23.821623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.466 ms 00:35:17.456 [2024-12-06 13:29:23.821635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.457 [2024-12-06 13:29:23.821678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.457 [2024-12-06 13:29:23.821694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:17.457 [2024-12-06 13:29:23.821706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:17.457 [2024-12-06 13:29:23.821717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.457 [2024-12-06 13:29:23.821789] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:17.457 [2024-12-06 13:29:23.821826] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:17.457 [2024-12-06 13:29:23.821895] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:17.457 [2024-12-06 13:29:23.821940] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:17.457 [2024-12-06 13:29:23.822087] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:17.457 [2024-12-06 13:29:23.822101] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:17.457 [2024-12-06 13:29:23.822113] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:17.457 [2024-12-06 13:29:23.822126] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822138] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822148] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:17.457 [2024-12-06 13:29:23.822158] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:17.457 [2024-12-06 13:29:23.822171] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:17.457 [2024-12-06 13:29:23.822180] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:17.457 [2024-12-06 13:29:23.822191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.457 [2024-12-06 13:29:23.822201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:17.457 [2024-12-06 13:29:23.822212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:35:17.457 [2024-12-06 13:29:23.822221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.457 [2024-12-06 13:29:23.822324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.457 [2024-12-06 13:29:23.822339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:17.457 [2024-12-06 13:29:23.822349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:35:17.457 [2024-12-06 13:29:23.822358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.457 [2024-12-06 13:29:23.822463] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:17.457 [2024-12-06 13:29:23.822481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:17.457 [2024-12-06 13:29:23.822492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:17.457 [2024-12-06 13:29:23.822520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:17.457 [2024-12-06 13:29:23.822548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:17.457 [2024-12-06 13:29:23.822611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:17.457 [2024-12-06 13:29:23.822630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:17.457 [2024-12-06 13:29:23.822641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:17.457 [2024-12-06 13:29:23.822666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:17.457 [2024-12-06 13:29:23.822677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:17.457 [2024-12-06 13:29:23.822688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:17.457 [2024-12-06 13:29:23.822710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822720] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:17.457 [2024-12-06 13:29:23.822740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:17.457 [2024-12-06 13:29:23.822771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:17.457 [2024-12-06 13:29:23.822801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:17.457 [2024-12-06 13:29:23.822830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:17.457 [2024-12-06 13:29:23.822850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:17.457 [2024-12-06 13:29:23.822860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:17.457 [2024-12-06 13:29:23.822870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:17.457 [2024-12-06 13:29:23.822880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:17.457 [2024-12-06 13:29:23.822891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:17.457 [2024-12-06 13:29:23.822916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:17.457 [2024-12-06 13:29:23.822940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:17.457 [2024-12-06 13:29:23.822965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:17.457 [2024-12-06 13:29:23.822988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:17.457 [2024-12-06 13:29:23.823310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:17.457 [2024-12-06 13:29:23.823372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:17.457 [2024-12-06 13:29:23.823412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:17.457 [2024-12-06 13:29:23.823446] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:17.457 [2024-12-06 13:29:23.823480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:17.457 [2024-12-06 13:29:23.823644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:17.457 [2024-12-06 13:29:23.823699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:17.457 [2024-12-06 13:29:23.823759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:17.457 [2024-12-06 13:29:23.823806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:17.457 [2024-12-06 13:29:23.823859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:17.457 
[2024-12-06 13:29:23.823980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:17.457 [2024-12-06 13:29:23.824035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:17.457 [2024-12-06 13:29:23.824086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:17.457 [2024-12-06 13:29:23.824231] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:17.457 [2024-12-06 13:29:23.824395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:17.457 [2024-12-06 13:29:23.824539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:17.457 [2024-12-06 13:29:23.824732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:17.457 [2024-12-06 13:29:23.824869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:17.457 [2024-12-06 13:29:23.825044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:17.457 [2024-12-06 13:29:23.825059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:17.457 [2024-12-06 13:29:23.825069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:17.457 [2024-12-06 13:29:23.825079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:17.457 [2024-12-06 13:29:23.825089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:17.457 [2024-12-06 13:29:23.825099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:17.457 [2024-12-06 13:29:23.825109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:17.457 [2024-12-06 13:29:23.825119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:17.457 [2024-12-06 13:29:23.825129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:17.457 [2024-12-06 13:29:23.825139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:17.457 [2024-12-06 13:29:23.825149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:17.457 [2024-12-06 13:29:23.825159] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:17.457 [2024-12-06 13:29:23.825171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:17.457 [2024-12-06 13:29:23.825182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:17.458 [2024-12-06 13:29:23.825192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:17.458 [2024-12-06 13:29:23.825202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:17.458 [2024-12-06 13:29:23.825212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:17.458 [2024-12-06 13:29:23.825225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.825235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:17.458 [2024-12-06 13:29:23.825247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.821 ms 00:35:17.458 [2024-12-06 13:29:23.825258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.855711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.855792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:17.458 [2024-12-06 13:29:23.855827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.384 ms 00:35:17.458 [2024-12-06 13:29:23.855843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.856011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.856029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:17.458 [2024-12-06 13:29:23.856041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:35:17.458 [2024-12-06 13:29:23.856053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.905088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.905139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:17.458 [2024-12-06 13:29:23.905172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.912 ms 00:35:17.458 [2024-12-06 13:29:23.905182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.905238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.905255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:17.458 [2024-12-06 13:29:23.905271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:17.458 [2024-12-06 13:29:23.905281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.905674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.905692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:17.458 [2024-12-06 13:29:23.905703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:35:17.458 [2024-12-06 13:29:23.905712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.905854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.906110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:17.458 [2024-12-06 13:29:23.906177] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:35:17.458 [2024-12-06 13:29:23.906215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.921255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.921430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:17.458 [2024-12-06 13:29:23.921582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.982 ms 00:35:17.458 [2024-12-06 13:29:23.921633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.936221] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:17.458 [2024-12-06 13:29:23.936434] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:17.458 [2024-12-06 13:29:23.936563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.936605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:17.458 [2024-12-06 13:29:23.936622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.773 ms 00:35:17.458 [2024-12-06 13:29:23.936633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.962829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.963048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:17.458 [2024-12-06 13:29:23.963080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.151 ms 00:35:17.458 [2024-12-06 13:29:23.963108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.458 [2024-12-06 13:29:23.979764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.458 [2024-12-06 13:29:23.979808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:17.458 [2024-12-06 13:29:23.979825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.591 ms 00:35:17.458 [2024-12-06 13:29:23.979836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.715 [2024-12-06 13:29:23.994699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.715 [2024-12-06 13:29:23.994736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:17.715 [2024-12-06 13:29:23.994766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.791 ms 00:35:17.716 [2024-12-06 13:29:23.994775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:23.995632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:23.995667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:17.716 [2024-12-06 13:29:23.995686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:35:17.716 [2024-12-06 13:29:23.995696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:24.061299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:24.061367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:17.716 [2024-12-06 13:29:24.061408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.574 ms 00:35:17.716 [2024-12-06 13:29:24.061419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:24.073895] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:17.716 [2024-12-06 13:29:24.076381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:24.076589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:17.716 [2024-12-06 13:29:24.076617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.891 ms 00:35:17.716 [2024-12-06 13:29:24.076630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:24.076732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:24.076752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:17.716 [2024-12-06 13:29:24.076769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:17.716 [2024-12-06 13:29:24.076780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:24.077458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:24.077484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:17.716 [2024-12-06 13:29:24.077497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:35:17.716 [2024-12-06 13:29:24.077507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:24.077540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:24.077555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:17.716 [2024-12-06 13:29:24.077581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:17.716 [2024-12-06 13:29:24.077606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:24.077666] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:17.716 [2024-12-06 13:29:24.077683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:24.077693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:17.716 [2024-12-06 13:29:24.077704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:35:17.716 [2024-12-06 13:29:24.077714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:24.106988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:24.107035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:17.716 [2024-12-06 13:29:24.107060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.246 ms 00:35:17.716 [2024-12-06 13:29:24.107071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:17.716 [2024-12-06 13:29:24.107154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:17.716 [2024-12-06 13:29:24.107175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:17.716 [2024-12-06 13:29:24.107188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:17.716 [2024-12-06 13:29:24.107199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
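Each management step above is logged by mngt/ftl_mngt.c as a four-record group: an Action (or, during teardown, Rollback) marker, the step name, the step duration in milliseconds, and a status code, where 0 means success. The finish_msg record that follows rolls the whole sequence up into a single total ('FTL startup', 315.188 ms here). To rank steps by cost from a saved console log, a small awk sketch such as the following works, assuming one record per line as originally printed on the console (the file name ftl.log is hypothetical):

    awk '/428:trace_step/ { name = $0; sub(/.*name: /, "", name) }
         /430:trace_step/ { dur = $0; sub(/.*duration: /, "", dur); sub(/ ms.*/, "", dur)
                            printf "%10.3f ms  %s\n", dur, name }' ftl.log | sort -rn | head

The per-step durations printed this way should sum to roughly the total that finish_msg reports, which is a quick sanity check that no step is missing from the capture.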
00:35:17.716 [2024-12-06 13:29:24.108579] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.188 ms, result 0 00:35:19.087  [2024-12-06T13:29:26.550Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-06T13:29:27.484Z] Copying: 49/1024 [MB] (24 MBps) [2024-12-06T13:29:28.419Z] Copying: 72/1024 [MB] (23 MBps) [2024-12-06T13:29:29.353Z] Copying: 95/1024 [MB] (22 MBps) [2024-12-06T13:29:30.729Z] Copying: 119/1024 [MB] (23 MBps) [2024-12-06T13:29:31.665Z] Copying: 141/1024 [MB] (22 MBps) [2024-12-06T13:29:32.600Z] Copying: 165/1024 [MB] (23 MBps) [2024-12-06T13:29:33.537Z] Copying: 189/1024 [MB] (24 MBps) [2024-12-06T13:29:34.473Z] Copying: 214/1024 [MB] (24 MBps) [2024-12-06T13:29:35.407Z] Copying: 237/1024 [MB] (23 MBps) [2024-12-06T13:29:36.343Z] Copying: 261/1024 [MB] (24 MBps) [2024-12-06T13:29:37.723Z] Copying: 286/1024 [MB] (24 MBps) [2024-12-06T13:29:38.661Z] Copying: 309/1024 [MB] (23 MBps) [2024-12-06T13:29:39.595Z] Copying: 334/1024 [MB] (24 MBps) [2024-12-06T13:29:40.528Z] Copying: 359/1024 [MB] (24 MBps) [2024-12-06T13:29:41.462Z] Copying: 385/1024 [MB] (26 MBps) [2024-12-06T13:29:42.398Z] Copying: 410/1024 [MB] (25 MBps) [2024-12-06T13:29:43.333Z] Copying: 434/1024 [MB] (24 MBps) [2024-12-06T13:29:44.322Z] Copying: 459/1024 [MB] (24 MBps) [2024-12-06T13:29:45.694Z] Copying: 483/1024 [MB] (24 MBps) [2024-12-06T13:29:46.629Z] Copying: 508/1024 [MB] (25 MBps) [2024-12-06T13:29:47.563Z] Copying: 532/1024 [MB] (24 MBps) [2024-12-06T13:29:48.500Z] Copying: 557/1024 [MB] (24 MBps) [2024-12-06T13:29:49.434Z] Copying: 580/1024 [MB] (23 MBps) [2024-12-06T13:29:50.372Z] Copying: 605/1024 [MB] (24 MBps) [2024-12-06T13:29:51.311Z] Copying: 628/1024 [MB] (22 MBps) [2024-12-06T13:29:52.686Z] Copying: 650/1024 [MB] (22 MBps) [2024-12-06T13:29:53.623Z] Copying: 672/1024 [MB] (22 MBps) [2024-12-06T13:29:54.564Z] Copying: 694/1024 [MB] (22 MBps) [2024-12-06T13:29:55.499Z] Copying: 717/1024 [MB] (22 MBps) [2024-12-06T13:29:56.432Z] Copying: 740/1024 [MB] (23 MBps) [2024-12-06T13:29:57.370Z] Copying: 763/1024 [MB] (23 MBps) [2024-12-06T13:29:58.307Z] Copying: 787/1024 [MB] (23 MBps) [2024-12-06T13:29:59.689Z] Copying: 811/1024 [MB] (24 MBps) [2024-12-06T13:30:00.626Z] Copying: 834/1024 [MB] (23 MBps) [2024-12-06T13:30:01.563Z] Copying: 858/1024 [MB] (23 MBps) [2024-12-06T13:30:02.499Z] Copying: 881/1024 [MB] (23 MBps) [2024-12-06T13:30:03.437Z] Copying: 905/1024 [MB] (23 MBps) [2024-12-06T13:30:04.404Z] Copying: 929/1024 [MB] (24 MBps) [2024-12-06T13:30:05.337Z] Copying: 953/1024 [MB] (23 MBps) [2024-12-06T13:30:06.713Z] Copying: 977/1024 [MB] (23 MBps) [2024-12-06T13:30:07.652Z] Copying: 1000/1024 [MB] (23 MBps) [2024-12-06T13:30:07.652Z] Copying: 1022/1024 [MB] (22 MBps) [2024-12-06T13:30:07.652Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-06 13:30:07.462628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 [2024-12-06 13:30:07.462950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:01.124 [2024-12-06 13:30:07.462988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:01.124 [2024-12-06 13:30:07.463004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.124 [2024-12-06 13:30:07.463050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:01.124 [2024-12-06 13:30:07.467363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 
[2024-12-06 13:30:07.467568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:01.124 [2024-12-06 13:30:07.467713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.281 ms 00:36:01.124 [2024-12-06 13:30:07.467812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.124 [2024-12-06 13:30:07.468200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 [2024-12-06 13:30:07.468366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:01.124 [2024-12-06 13:30:07.468499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:36:01.124 [2024-12-06 13:30:07.468680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.124 [2024-12-06 13:30:07.474224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 [2024-12-06 13:30:07.474415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:01.124 [2024-12-06 13:30:07.474552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.463 ms 00:36:01.124 [2024-12-06 13:30:07.474724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.124 [2024-12-06 13:30:07.483173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 [2024-12-06 13:30:07.483281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:01.124 [2024-12-06 13:30:07.483343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.367 ms 00:36:01.124 [2024-12-06 13:30:07.483389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.124 [2024-12-06 13:30:07.522733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 [2024-12-06 13:30:07.522961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:01.124 [2024-12-06 13:30:07.523117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.211 ms 00:36:01.124 [2024-12-06 13:30:07.523177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.124 [2024-12-06 13:30:07.546540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 [2024-12-06 13:30:07.546750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:01.124 [2024-12-06 13:30:07.546957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:36:01.124 [2024-12-06 13:30:07.547023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.124 [2024-12-06 13:30:07.549215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 [2024-12-06 13:30:07.549422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:01.124 [2024-12-06 13:30:07.549563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.011 ms 00:36:01.124 [2024-12-06 13:30:07.549622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.124 [2024-12-06 13:30:07.588602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.124 [2024-12-06 13:30:07.588827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:01.124 [2024-12-06 13:30:07.588983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.905 ms 00:36:01.125 [2024-12-06 13:30:07.589042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.125 [2024-12-06 13:30:07.627663] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.125 [2024-12-06 13:30:07.627710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:01.125 [2024-12-06 13:30:07.627730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.436 ms 00:36:01.125 [2024-12-06 13:30:07.627743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.385 [2024-12-06 13:30:07.665474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.385 [2024-12-06 13:30:07.665527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:01.385 [2024-12-06 13:30:07.665546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.672 ms 00:36:01.385 [2024-12-06 13:30:07.665560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.385 [2024-12-06 13:30:07.702948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.385 [2024-12-06 13:30:07.703000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:01.385 [2024-12-06 13:30:07.703020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.281 ms 00:36:01.385 [2024-12-06 13:30:07.703033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.385 [2024-12-06 13:30:07.703083] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:01.385 [2024-12-06 13:30:07.703118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:01.385 [2024-12-06 13:30:07.703150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:36:01.385 [2024-12-06 13:30:07.703166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:36:01.385 [2024-12-06 13:30:07.703363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:01.385 [2024-12-06 13:30:07.703690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.703991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704424] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:01.386 [2024-12-06 13:30:07.704584] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:01.386 [2024-12-06 13:30:07.704597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df65a8bd-07a1-4165-8672-9faf2c9274d0 00:36:01.386 [2024-12-06 13:30:07.704611] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:36:01.386 [2024-12-06 13:30:07.704628] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:01.386 [2024-12-06 13:30:07.704641] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:01.386 [2024-12-06 13:30:07.704654] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:01.386 [2024-12-06 13:30:07.704680] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:01.386 [2024-12-06 13:30:07.704694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:01.386 [2024-12-06 13:30:07.704707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:01.386 [2024-12-06 13:30:07.704718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:01.386 [2024-12-06 13:30:07.704730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:01.386 [2024-12-06 13:30:07.704744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.386 [2024-12-06 13:30:07.704757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:01.386 [2024-12-06 13:30:07.704772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:36:01.386 [2024-12-06 13:30:07.704790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.386 [2024-12-06 13:30:07.725021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.386 [2024-12-06 13:30:07.725066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:01.386 [2024-12-06 13:30:07.725085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 20.183 ms 00:36:01.386 [2024-12-06 13:30:07.725099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.386 [2024-12-06 13:30:07.725617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:01.386 [2024-12-06 13:30:07.725658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:01.386 [2024-12-06 13:30:07.725675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:36:01.386 [2024-12-06 13:30:07.725688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.386 [2024-12-06 13:30:07.767159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.386 [2024-12-06 13:30:07.767202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:01.386 [2024-12-06 13:30:07.767230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.386 [2024-12-06 13:30:07.767240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.386 [2024-12-06 13:30:07.767318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.386 [2024-12-06 13:30:07.767337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:01.386 [2024-12-06 13:30:07.767347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.386 [2024-12-06 13:30:07.767356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.386 [2024-12-06 13:30:07.767464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.386 [2024-12-06 13:30:07.767481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:01.386 [2024-12-06 13:30:07.767492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.386 [2024-12-06 13:30:07.767502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.386 [2024-12-06 13:30:07.767522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.386 [2024-12-06 13:30:07.767534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:01.387 [2024-12-06 13:30:07.767549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.387 [2024-12-06 13:30:07.767558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.387 [2024-12-06 13:30:07.848601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.387 [2024-12-06 13:30:07.848677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:01.387 [2024-12-06 13:30:07.848709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.387 [2024-12-06 13:30:07.848719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.646 [2024-12-06 13:30:07.922308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.646 [2024-12-06 13:30:07.922385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:01.646 [2024-12-06 13:30:07.922416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.646 [2024-12-06 13:30:07.922436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.646 [2024-12-06 13:30:07.922528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.646 [2024-12-06 13:30:07.922543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:01.646 
[2024-12-06 13:30:07.922553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.646 [2024-12-06 13:30:07.922563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.646 [2024-12-06 13:30:07.922600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.646 [2024-12-06 13:30:07.922613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:01.646 [2024-12-06 13:30:07.922623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.646 [2024-12-06 13:30:07.922652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.646 [2024-12-06 13:30:07.922781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.646 [2024-12-06 13:30:07.922799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:01.646 [2024-12-06 13:30:07.922810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.646 [2024-12-06 13:30:07.922820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.646 [2024-12-06 13:30:07.922862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.646 [2024-12-06 13:30:07.922877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:01.646 [2024-12-06 13:30:07.922946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.646 [2024-12-06 13:30:07.922959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.646 [2024-12-06 13:30:07.923009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.646 [2024-12-06 13:30:07.923023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:01.646 [2024-12-06 13:30:07.923034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.646 [2024-12-06 13:30:07.923044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.646 [2024-12-06 13:30:07.923092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:01.646 [2024-12-06 13:30:07.923113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:01.646 [2024-12-06 13:30:07.923125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:01.646 [2024-12-06 13:30:07.923140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:01.646 [2024-12-06 13:30:07.923331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.634 ms, result 0 00:36:02.214 00:36:02.214 00:36:02.473 13:30:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:36:04.378 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:36:04.378 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:36:04.378 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:36:04.378 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:04.378 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:04.378 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:36:04.637 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:04.637 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:36:04.637 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81462 00:36:04.637 13:30:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81462 ']' 00:36:04.637 13:30:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81462 00:36:04.637 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81462) - No such process 00:36:04.637 Process with pid 81462 is not found 00:36:04.637 13:30:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81462 is not found' 00:36:04.637 13:30:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:36:04.896 Remove shared memory files 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:36:04.896 00:36:04.896 real 3m53.048s 00:36:04.896 user 4m28.842s 00:36:04.896 sys 0m38.198s 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:04.896 13:30:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:04.896 ************************************ 00:36:04.896 END TEST ftl_dirty_shutdown 00:36:04.896 ************************************ 00:36:04.896 13:30:11 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:36:04.896 13:30:11 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:04.896 13:30:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:04.896 13:30:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:04.896 ************************************ 00:36:04.896 START TEST ftl_upgrade_shutdown 00:36:04.896 ************************************ 00:36:04.896 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:36:04.896 * Looking for test storage... 
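Before touching any devices, the new test probes the installed lcov version through scripts/common.sh: the xtrace below walks `lt 1.15 2` into cmp_versions, which splits both version strings on IFS=.-:, treats missing components as zero, and compares component by component. A condensed, self-contained sketch of that pattern, assuming plain dotted versions (the full helper dispatches on the operator argument via its `case "$op"` block; only '<' is sketched here):

    lt() { # sketch: exit 0 when dotted version $1 < $2
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller component
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger component
        done
        return 1 # all components equal
    }

In the trace that follows, 1.15 compares below 2 on the first component, so the helper returns 0 and the matching set of lcov --rc coverage options is selected.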
00:36:04.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:36:04.896 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:04.896 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:36:04.896 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:05.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.155 --rc genhtml_branch_coverage=1 00:36:05.155 --rc genhtml_function_coverage=1 00:36:05.155 --rc genhtml_legend=1 00:36:05.155 --rc geninfo_all_blocks=1 00:36:05.155 --rc geninfo_unexecuted_blocks=1 00:36:05.155 00:36:05.155 ' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:05.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.155 --rc genhtml_branch_coverage=1 00:36:05.155 --rc genhtml_function_coverage=1 00:36:05.155 --rc genhtml_legend=1 00:36:05.155 --rc geninfo_all_blocks=1 00:36:05.155 --rc geninfo_unexecuted_blocks=1 00:36:05.155 00:36:05.155 ' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:05.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.155 --rc genhtml_branch_coverage=1 00:36:05.155 --rc genhtml_function_coverage=1 00:36:05.155 --rc genhtml_legend=1 00:36:05.155 --rc geninfo_all_blocks=1 00:36:05.155 --rc geninfo_unexecuted_blocks=1 00:36:05.155 00:36:05.155 ' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:05.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.155 --rc genhtml_branch_coverage=1 00:36:05.155 --rc genhtml_function_coverage=1 00:36:05.155 --rc genhtml_legend=1 00:36:05.155 --rc geninfo_all_blocks=1 00:36:05.155 --rc geninfo_unexecuted_blocks=1 00:36:05.155 00:36:05.155 ' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:36:05.155 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:36:05.156 13:30:11 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83858 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83858 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83858 ']' 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.156 13:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:05.156 [2024-12-06 13:30:11.619600] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:36:05.156 [2024-12-06 13:30:11.619810] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83858 ] 00:36:05.414 [2024-12-06 13:30:11.802370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.414 [2024-12-06 13:30:11.916477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:36:06.350 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:36:06.609 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:36:06.609 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:36:06.609 13:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:36:06.609 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:36:06.609 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:06.609 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:36:06.609 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:36:06.609 13:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:36:06.868 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:06.868 { 00:36:06.868 "name": "basen1", 00:36:06.868 "aliases": [ 00:36:06.868 "d0fff93a-35b1-44ae-9671-cc9542620b5d" 00:36:06.868 ], 00:36:06.868 "product_name": "NVMe disk", 00:36:06.868 "block_size": 4096, 00:36:06.868 "num_blocks": 1310720, 00:36:06.868 "uuid": "d0fff93a-35b1-44ae-9671-cc9542620b5d", 00:36:06.868 "numa_id": -1, 00:36:06.868 "assigned_rate_limits": { 00:36:06.868 "rw_ios_per_sec": 0, 00:36:06.868 "rw_mbytes_per_sec": 0, 00:36:06.868 "r_mbytes_per_sec": 0, 00:36:06.868 "w_mbytes_per_sec": 0 00:36:06.868 }, 00:36:06.868 "claimed": true, 00:36:06.868 "claim_type": "read_many_write_one", 00:36:06.868 "zoned": false, 00:36:06.868 "supported_io_types": { 00:36:06.868 "read": true, 00:36:06.868 "write": true, 00:36:06.868 "unmap": true, 00:36:06.868 "flush": true, 00:36:06.868 "reset": true, 00:36:06.868 "nvme_admin": true, 00:36:06.868 "nvme_io": true, 00:36:06.868 "nvme_io_md": false, 00:36:06.868 "write_zeroes": true, 00:36:06.868 "zcopy": false, 00:36:06.868 "get_zone_info": false, 00:36:06.868 "zone_management": false, 00:36:06.868 "zone_append": false, 00:36:06.868 "compare": true, 00:36:06.868 "compare_and_write": false, 00:36:06.868 "abort": true, 00:36:06.868 "seek_hole": false, 00:36:06.869 "seek_data": false, 00:36:06.869 "copy": true, 00:36:06.869 "nvme_iov_md": false 00:36:06.869 }, 00:36:06.869 "driver_specific": { 00:36:06.869 "nvme": [ 00:36:06.869 { 00:36:06.869 "pci_address": "0000:00:11.0", 00:36:06.869 "trid": { 00:36:06.869 "trtype": "PCIe", 00:36:06.869 "traddr": "0000:00:11.0" 00:36:06.869 }, 00:36:06.869 "ctrlr_data": { 00:36:06.869 "cntlid": 0, 00:36:06.869 "vendor_id": "0x1b36", 00:36:06.869 "model_number": "QEMU NVMe Ctrl", 00:36:06.869 "serial_number": "12341", 00:36:06.869 "firmware_revision": "8.0.0", 00:36:06.869 "subnqn": "nqn.2019-08.org.qemu:12341", 00:36:06.869 "oacs": { 00:36:06.869 "security": 0, 00:36:06.869 "format": 1, 00:36:06.869 "firmware": 0, 00:36:06.869 "ns_manage": 1 00:36:06.869 }, 00:36:06.869 "multi_ctrlr": false, 00:36:06.869 "ana_reporting": false 00:36:06.869 }, 00:36:06.869 "vs": { 00:36:06.869 "nvme_version": "1.4" 00:36:06.869 }, 00:36:06.869 "ns_data": { 00:36:06.869 "id": 1, 00:36:06.869 "can_share": false 00:36:06.869 } 00:36:06.869 } 00:36:06.869 ], 00:36:06.869 "mp_policy": "active_passive" 00:36:06.869 } 00:36:06.869 } 00:36:06.869 ]' 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:06.869 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:07.128 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=461d21bc-1f09-4639-8537-69b3c5c5194b 00:36:07.128 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:36:07.128 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 461d21bc-1f09-4639-8537-69b3c5c5194b 00:36:07.388 13:30:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:36:07.647 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c1441fd8-c2f8-4867-b9bf-de8af70c5430 00:36:07.647 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c1441fd8-c2f8-4867-b9bf-de8af70c5430 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=7a44889a-3a6a-44ce-a812-1b09d267adea 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 7a44889a-3a6a-44ce-a812-1b09d267adea ]] 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 7a44889a-3a6a-44ce-a812-1b09d267adea 5120 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=7a44889a-3a6a-44ce-a812-1b09d267adea 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7a44889a-3a6a-44ce-a812-1b09d267adea 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=7a44889a-3a6a-44ce-a812-1b09d267adea 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:36:07.906 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a44889a-3a6a-44ce-a812-1b09d267adea 00:36:08.165 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:08.165 { 00:36:08.165 "name": "7a44889a-3a6a-44ce-a812-1b09d267adea", 00:36:08.165 "aliases": [ 00:36:08.165 "lvs/basen1p0" 00:36:08.165 ], 00:36:08.165 "product_name": "Logical Volume", 00:36:08.165 "block_size": 4096, 00:36:08.165 "num_blocks": 5242880, 00:36:08.165 "uuid": "7a44889a-3a6a-44ce-a812-1b09d267adea", 00:36:08.165 "assigned_rate_limits": { 00:36:08.165 "rw_ios_per_sec": 0, 00:36:08.165 "rw_mbytes_per_sec": 0, 00:36:08.165 "r_mbytes_per_sec": 0, 00:36:08.165 "w_mbytes_per_sec": 0 00:36:08.165 }, 00:36:08.165 "claimed": false, 00:36:08.165 "zoned": false, 00:36:08.165 "supported_io_types": { 00:36:08.165 "read": true, 00:36:08.165 "write": true, 00:36:08.165 "unmap": true, 00:36:08.165 "flush": false, 00:36:08.165 "reset": true, 00:36:08.165 "nvme_admin": false, 00:36:08.165 "nvme_io": false, 00:36:08.166 "nvme_io_md": false, 00:36:08.166 "write_zeroes": 
true, 00:36:08.166 "zcopy": false, 00:36:08.166 "get_zone_info": false, 00:36:08.166 "zone_management": false, 00:36:08.166 "zone_append": false, 00:36:08.166 "compare": false, 00:36:08.166 "compare_and_write": false, 00:36:08.166 "abort": false, 00:36:08.166 "seek_hole": true, 00:36:08.166 "seek_data": true, 00:36:08.166 "copy": false, 00:36:08.166 "nvme_iov_md": false 00:36:08.166 }, 00:36:08.166 "driver_specific": { 00:36:08.166 "lvol": { 00:36:08.166 "lvol_store_uuid": "c1441fd8-c2f8-4867-b9bf-de8af70c5430", 00:36:08.166 "base_bdev": "basen1", 00:36:08.166 "thin_provision": true, 00:36:08.166 "num_allocated_clusters": 0, 00:36:08.166 "snapshot": false, 00:36:08.166 "clone": false, 00:36:08.166 "esnap_clone": false 00:36:08.166 } 00:36:08.166 } 00:36:08.166 } 00:36:08.166 ]' 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:36:08.425 13:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:36:08.685 13:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:36:08.685 13:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:36:08.685 13:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:36:08.944 13:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:36:08.944 13:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:36:08.944 13:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 7a44889a-3a6a-44ce-a812-1b09d267adea -c cachen1p0 --l2p_dram_limit 2 00:36:09.203 [2024-12-06 13:30:15.642004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.642065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:09.203 [2024-12-06 13:30:15.642085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:09.203 [2024-12-06 13:30:15.642095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.642168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.642184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:09.203 [2024-12-06 13:30:15.642197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:36:09.203 [2024-12-06 13:30:15.642206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.642233] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:09.203 [2024-12-06 
13:30:15.643068] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:09.203 [2024-12-06 13:30:15.643098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.643109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:09.203 [2024-12-06 13:30:15.643125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.869 ms 00:36:09.203 [2024-12-06 13:30:15.643135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.643273] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d3c6f60d-c4ff-4f3c-8b6d-18447d891aed 00:36:09.203 [2024-12-06 13:30:15.644252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.644284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:36:09.203 [2024-12-06 13:30:15.644297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:36:09.203 [2024-12-06 13:30:15.644308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.648498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.648572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:09.203 [2024-12-06 13:30:15.648585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.146 ms 00:36:09.203 [2024-12-06 13:30:15.648597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.648652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.648670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:09.203 [2024-12-06 13:30:15.648681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:36:09.203 [2024-12-06 13:30:15.648695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.648767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.648787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:09.203 [2024-12-06 13:30:15.648801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:09.203 [2024-12-06 13:30:15.648813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.648856] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:09.203 [2024-12-06 13:30:15.652947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.652991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:09.203 [2024-12-06 13:30:15.653007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.095 ms 00:36:09.203 [2024-12-06 13:30:15.653017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.653051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.653064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:09.203 [2024-12-06 13:30:15.653076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:09.203 [2024-12-06 13:30:15.653086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.653126] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:36:09.203 [2024-12-06 13:30:15.653309] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:09.203 [2024-12-06 13:30:15.653330] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:09.203 [2024-12-06 13:30:15.653343] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:09.203 [2024-12-06 13:30:15.653358] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:09.203 [2024-12-06 13:30:15.653370] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:09.203 [2024-12-06 13:30:15.653384] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:09.203 [2024-12-06 13:30:15.653394] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:09.203 [2024-12-06 13:30:15.653410] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:09.203 [2024-12-06 13:30:15.653420] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:09.203 [2024-12-06 13:30:15.653432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.653442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:09.203 [2024-12-06 13:30:15.653454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:36:09.203 [2024-12-06 13:30:15.653464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.653552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.203 [2024-12-06 13:30:15.653582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:09.203 [2024-12-06 13:30:15.653597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:36:09.203 [2024-12-06 13:30:15.653607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.203 [2024-12-06 13:30:15.653720] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:09.203 [2024-12-06 13:30:15.653736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:09.203 [2024-12-06 13:30:15.653749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:09.203 [2024-12-06 13:30:15.653759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.203 [2024-12-06 13:30:15.653772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:09.203 [2024-12-06 13:30:15.653781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:09.203 [2024-12-06 13:30:15.653793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:09.203 [2024-12-06 13:30:15.653803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:09.203 [2024-12-06 13:30:15.653814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:09.203 [2024-12-06 13:30:15.653823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.203 [2024-12-06 13:30:15.653852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:09.203 [2024-12-06 13:30:15.653867] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:36:09.203 [2024-12-06 13:30:15.653879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.203 [2024-12-06 13:30:15.653889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:09.203 [2024-12-06 13:30:15.653900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:36:09.203 [2024-12-06 13:30:15.653910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.203 [2024-12-06 13:30:15.653923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:09.203 [2024-12-06 13:30:15.653933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:09.203 [2024-12-06 13:30:15.653944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.203 [2024-12-06 13:30:15.653953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:09.203 [2024-12-06 13:30:15.653964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:09.203 [2024-12-06 13:30:15.653973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:09.203 [2024-12-06 13:30:15.653998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:09.204 [2024-12-06 13:30:15.654007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:09.204 [2024-12-06 13:30:15.654018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:09.204 [2024-12-06 13:30:15.654027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:09.204 [2024-12-06 13:30:15.654038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:09.204 [2024-12-06 13:30:15.654047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:09.204 [2024-12-06 13:30:15.654058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:09.204 [2024-12-06 13:30:15.654067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:09.204 [2024-12-06 13:30:15.654077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:09.204 [2024-12-06 13:30:15.654086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:09.204 [2024-12-06 13:30:15.654099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:09.204 [2024-12-06 13:30:15.654108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.204 [2024-12-06 13:30:15.654120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:09.204 [2024-12-06 13:30:15.654129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:09.204 [2024-12-06 13:30:15.654141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.204 [2024-12-06 13:30:15.654151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:09.204 [2024-12-06 13:30:15.654162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:09.204 [2024-12-06 13:30:15.654170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.204 [2024-12-06 13:30:15.654181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:09.204 [2024-12-06 13:30:15.654190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:09.204 [2024-12-06 13:30:15.654202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.204 [2024-12-06 13:30:15.654211] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:36:09.204 [2024-12-06 13:30:15.654223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:09.204 [2024-12-06 13:30:15.654233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:09.204 [2024-12-06 13:30:15.654258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:09.204 [2024-12-06 13:30:15.654268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:09.204 [2024-12-06 13:30:15.654281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:09.204 [2024-12-06 13:30:15.654290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:09.204 [2024-12-06 13:30:15.654301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:09.204 [2024-12-06 13:30:15.654309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:09.204 [2024-12-06 13:30:15.654320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:09.204 [2024-12-06 13:30:15.654331] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:09.204 [2024-12-06 13:30:15.654347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:09.204 [2024-12-06 13:30:15.654370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:09.204 [2024-12-06 13:30:15.654401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:09.204 [2024-12-06 13:30:15.654412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:09.204 [2024-12-06 13:30:15.654422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:09.204 [2024-12-06 13:30:15.654435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:09.204 [2024-12-06 13:30:15.654527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:09.204 [2024-12-06 13:30:15.654539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:09.204 [2024-12-06 13:30:15.654562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:09.204 [2024-12-06 13:30:15.654572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:09.204 [2024-12-06 13:30:15.654585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:09.204 [2024-12-06 13:30:15.654596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:09.204 [2024-12-06 13:30:15.654609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:09.204 [2024-12-06 13:30:15.654619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.942 ms 00:36:09.204 [2024-12-06 13:30:15.654631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:09.204 [2024-12-06 13:30:15.654676] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
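A note on the sizing logic that keeps recurring above: get_bdev_size just multiplies the two jq readings and converts bytes to MiB. A minimal sketch of the same pipeline, using only values this log reports (variable names mirror the autotest_common.sh trace above):

    # bdev size in MiB = block_size * num_blocks / 2^20
    bs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .block_size')   # 4096
    nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))   # 4096 * 1310720 / 2^20 = 5120 MiB
    # same arithmetic for the thin lvol: 4096 * 5242880 / 2^20 = 20480 MiB

The layout dump above is self-consistent on the same basis: 3774873 L2P entries at 4 bytes each is about 14.4 MiB, which rounds up into the 14.50 MiB l2p region.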
00:36:09.204 [2024-12-06 13:30:15.654695] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:36:13.390 [2024-12-06 13:30:19.477417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.390 [2024-12-06 13:30:19.477523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:36:13.390 [2024-12-06 13:30:19.477544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3822.763 ms 00:36:13.390 [2024-12-06 13:30:19.477557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.390 [2024-12-06 13:30:19.505667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.390 [2024-12-06 13:30:19.505734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:13.390 [2024-12-06 13:30:19.505752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.883 ms 00:36:13.390 [2024-12-06 13:30:19.505765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.390 [2024-12-06 13:30:19.505886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.390 [2024-12-06 13:30:19.505908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:13.390 [2024-12-06 13:30:19.505919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:36:13.390 [2024-12-06 13:30:19.505936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.390 [2024-12-06 13:30:19.540056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.390 [2024-12-06 13:30:19.540120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:13.390 [2024-12-06 13:30:19.540136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.024 ms 00:36:13.390 [2024-12-06 13:30:19.540151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.390 [2024-12-06 13:30:19.540226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.390 [2024-12-06 13:30:19.540245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:13.390 [2024-12-06 13:30:19.540256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:13.391 [2024-12-06 13:30:19.540268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.540617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.540638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:13.391 [2024-12-06 13:30:19.540660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.283 ms 00:36:13.391 [2024-12-06 13:30:19.540673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.540718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.540733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:13.391 [2024-12-06 13:30:19.540746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:36:13.391 [2024-12-06 13:30:19.540759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.556610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.556671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:13.391 [2024-12-06 13:30:19.556687] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.830 ms 00:36:13.391 [2024-12-06 13:30:19.556699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.579590] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:13.391 [2024-12-06 13:30:19.580612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.580658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:13.391 [2024-12-06 13:30:19.580677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.778 ms 00:36:13.391 [2024-12-06 13:30:19.580689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.613351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.613409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:36:13.391 [2024-12-06 13:30:19.613428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.619 ms 00:36:13.391 [2024-12-06 13:30:19.613438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.613533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.613553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:13.391 [2024-12-06 13:30:19.613569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:36:13.391 [2024-12-06 13:30:19.613579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.640667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.640716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:36:13.391 [2024-12-06 13:30:19.640733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.028 ms 00:36:13.391 [2024-12-06 13:30:19.640744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.667875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.667909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:36:13.391 [2024-12-06 13:30:19.667926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.082 ms 00:36:13.391 [2024-12-06 13:30:19.667936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.668591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.668614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:13.391 [2024-12-06 13:30:19.668628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.613 ms 00:36:13.391 [2024-12-06 13:30:19.668641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.761327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.761386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:36:13.391 [2024-12-06 13:30:19.761408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 92.639 ms 00:36:13.391 [2024-12-06 13:30:19.761419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.787540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:36:13.391 [2024-12-06 13:30:19.787602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:36:13.391 [2024-12-06 13:30:19.787621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.013 ms 00:36:13.391 [2024-12-06 13:30:19.787632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.812954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.813001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:36:13.391 [2024-12-06 13:30:19.813017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.277 ms 00:36:13.391 [2024-12-06 13:30:19.813027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.838623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.838671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:36:13.391 [2024-12-06 13:30:19.838688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.554 ms 00:36:13.391 [2024-12-06 13:30:19.838699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.838746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.838761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:13.391 [2024-12-06 13:30:19.838776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:13.391 [2024-12-06 13:30:19.838786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.838884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.391 [2024-12-06 13:30:19.838934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:13.391 [2024-12-06 13:30:19.838962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:36:13.391 [2024-12-06 13:30:19.838973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.391 [2024-12-06 13:30:19.840251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4197.701 ms, result 0 00:36:13.391 { 00:36:13.391 "name": "ftl", 00:36:13.391 "uuid": "d3c6f60d-c4ff-4f3c-8b6d-18447d891aed" 00:36:13.391 } 00:36:13.391 13:30:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:36:13.648 [2024-12-06 13:30:20.143359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.648 13:30:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:36:13.906 13:30:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:36:14.164 [2024-12-06 13:30:20.631953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:14.164 13:30:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:36:14.422 [2024-12-06 13:30:20.873045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:14.422 13:30:20 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:36:14.989 Fill FTL, iteration 1 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:36:14.989 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83994 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83994 /var/tmp/spdk.tgt.sock 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83994 ']' 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:36:14.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.990 13:30:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:14.990 [2024-12-06 13:30:21.343844] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
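Before the fill starts, it is worth checking that the parameters set up above are internally consistent; nothing here goes beyond what the log itself prints:

    # size=1073741824 is exactly bs * count:
    #   1048576 B (1 MiB) * 1024 blocks = 1073741824 B = 1 GiB per pass
    # iterations=2 -> two such passes; qd=2 -> spdk_dd keeps a queue
    # depth of two requests in flight.
    echo $(( 1048576 * 1024 ))   # 1073741824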
00:36:14.990 [2024-12-06 13:30:21.343999] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83994 ] 00:36:14.990 [2024-12-06 13:30:21.506597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.248 [2024-12-06 13:30:21.598020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.813 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.813 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:15.813 13:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:36:16.378 ftln1 00:36:16.378 13:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:36:16.378 13:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83994 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83994 ']' 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83994 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83994 00:36:16.636 killing process with pid 83994 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83994' 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83994 00:36:16.636 13:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83994 00:36:18.539 13:30:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:36:18.539 13:30:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:36:18.539 [2024-12-06 13:30:24.795397] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
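The initiator wiring buried in the trace above deserves a second look, since every spdk_dd run depends on it. Condensed, with the command exactly as the log shows it (-b ftl names the controller, so namespace 1 of the subsystem surfaces as the bdev ftln1):

    # Attach over NVMe/TCP to the subsystem the target exported earlier:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0

The resulting bdev subsystem config is then saved (the '{"subsystems": [' / save_subsystem_config / ']}' sequence above), which is presumably what lands in test/ftl/config/ini.json; later spdk_dd invocations load that file via --json instead of re-attaching, which is why tcp_initiator_setup returns 0 immediately from then on.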
00:36:18.539 [2024-12-06 13:30:24.795570] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84040 ] 00:36:18.539 [2024-12-06 13:30:24.976434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.798 [2024-12-06 13:30:25.066029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:20.175  [2024-12-06T13:30:27.654Z] Copying: 210/1024 [MB] (210 MBps) [2024-12-06T13:30:28.593Z] Copying: 422/1024 [MB] (212 MBps) [2024-12-06T13:30:29.530Z] Copying: 637/1024 [MB] (215 MBps) [2024-12-06T13:30:30.467Z] Copying: 849/1024 [MB] (212 MBps) [2024-12-06T13:30:31.405Z] Copying: 1024/1024 [MB] (average 212 MBps) 00:36:24.877 00:36:24.877 Calculate MD5 checksum, iteration 1 00:36:24.877 13:30:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:36:24.877 13:30:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:36:24.877 13:30:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:24.877 13:30:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:24.877 13:30:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:24.877 13:30:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:24.877 13:30:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:24.877 13:30:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:24.877 [2024-12-06 13:30:31.224499] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
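Stepping back, both iterations follow the same write-then-readback shape; a condensed paraphrase of the loop being traced here, under the variable names upgrade_shutdown.sh itself uses (tcp_dd stands in for the spdk_dd wrapper shown above, and testdir for .../spdk/test/ftl):

    for (( i = 0; i < iterations; i++ )); do
        # fill: 1 GiB of fresh urandom data at the current write offset
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        seek=$(( seek + count ))
        # readback: pull the same 1 GiB out through ftln1
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$(( skip + count ))
        # fingerprint the stripe
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done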
00:36:24.877 [2024-12-06 13:30:31.224635] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84100 ] 00:36:24.877 [2024-12-06 13:30:31.389417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.136 [2024-12-06 13:30:31.475722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.516  [2024-12-06T13:30:33.981Z] Copying: 482/1024 [MB] (482 MBps) [2024-12-06T13:30:34.241Z] Copying: 964/1024 [MB] (482 MBps) [2024-12-06T13:30:34.864Z] Copying: 1024/1024 [MB] (average 482 MBps) 00:36:28.336 00:36:28.336 13:30:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:36:28.336 13:30:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:36:30.871 Fill FTL, iteration 2 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=972199def19643bc394fe844ef8d6e57 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:30.871 13:30:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:36:30.871 [2024-12-06 13:30:36.851118] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
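sums[0] above is simply the first whitespace-delimited field of md5sum output; illustratively, with the file just read back:

    $ md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '
    972199def19643bc394fe844ef8d6e57

These per-stripe hashes exist so the same LBA ranges can be re-read and compared after the shutdown-with-upgrade cycle this test is driving; the comparison itself comes later in the run.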
00:36:30.871 [2024-12-06 13:30:36.851277] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84162 ] 00:36:30.871 [2024-12-06 13:30:37.024297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.871 [2024-12-06 13:30:37.154527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.250  [2024-12-06T13:30:39.716Z] Copying: 207/1024 [MB] (207 MBps) [2024-12-06T13:30:40.653Z] Copying: 409/1024 [MB] (202 MBps) [2024-12-06T13:30:41.590Z] Copying: 608/1024 [MB] (199 MBps) [2024-12-06T13:30:42.968Z] Copying: 812/1024 [MB] (204 MBps) [2024-12-06T13:30:42.968Z] Copying: 1023/1024 [MB] (211 MBps) [2024-12-06T13:30:43.906Z] Copying: 1024/1024 [MB] (average 204 MBps) 00:36:37.378 00:36:37.378 Calculate MD5 checksum, iteration 2 00:36:37.378 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:36:37.378 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:36:37.378 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:37.378 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:37.378 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:37.378 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:37.378 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:37.378 13:30:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:37.378 [2024-12-06 13:30:43.759089] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
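At this point the offsets have advanced by exactly one stripe per pass; the bookkeeping the log just printed checks out:

    # iteration 1: write --seek=0,    readback --skip=0     (1 MiB-block units)
    # iteration 2: write --seek=1024, readback --skip=1024
    # after two 1024-block passes both counters land on 2048, i.e. 2 GiB:
    echo $(( 2 * 1024 ))   # 2048, matching seek=2048 above (skip follows after the readback)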
00:36:37.378 [2024-12-06 13:30:43.760147] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84232 ] 00:36:37.637 [2024-12-06 13:30:43.945084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.637 [2024-12-06 13:30:44.036259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.536  [2024-12-06T13:30:46.629Z] Copying: 489/1024 [MB] (489 MBps) [2024-12-06T13:30:46.885Z] Copying: 981/1024 [MB] (492 MBps) [2024-12-06T13:30:48.260Z] Copying: 1024/1024 [MB] (average 489 MBps) 00:36:41.732 00:36:41.732 13:30:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:36:41.732 13:30:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:43.690 13:30:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:36:43.690 13:30:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fe3baee20618415985fcced1e8af10ad 00:36:43.690 13:30:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:36:43.690 13:30:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:36:43.690 13:30:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:36:43.950 [2024-12-06 13:30:50.219734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:43.950 [2024-12-06 13:30:50.219905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:43.950 [2024-12-06 13:30:50.219927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:36:43.950 [2024-12-06 13:30:50.219940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:43.950 [2024-12-06 13:30:50.219977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:43.950 [2024-12-06 13:30:50.220000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:43.950 [2024-12-06 13:30:50.220012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:43.950 [2024-12-06 13:30:50.220024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:43.950 [2024-12-06 13:30:50.220054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:43.950 [2024-12-06 13:30:50.220067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:43.950 [2024-12-06 13:30:50.220079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:43.950 [2024-12-06 13:30:50.220089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:43.950 [2024-12-06 13:30:50.220198] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.427 ms, result 0 00:36:43.950 true 00:36:43.950 13:30:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:43.950 { 00:36:43.950 "name": "ftl", 00:36:43.950 "properties": [ 00:36:43.950 { 00:36:43.950 "name": "superblock_version", 00:36:43.950 "value": 5, 00:36:43.950 "read-only": true 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "name": "base_device", 00:36:43.950 "bands": [ 00:36:43.950 { 00:36:43.950 "id": 
0, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 1, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 2, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 3, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 4, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 5, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 6, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 7, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 8, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 9, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 10, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 11, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 12, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 13, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 14, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 15, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 16, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 17, 00:36:43.950 "state": "FREE", 00:36:43.950 "validity": 0.0 00:36:43.950 } 00:36:43.950 ], 00:36:43.950 "read-only": true 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "name": "cache_device", 00:36:43.950 "type": "bdev", 00:36:43.950 "chunks": [ 00:36:43.950 { 00:36:43.950 "id": 0, 00:36:43.950 "state": "INACTIVE", 00:36:43.950 "utilization": 0.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 1, 00:36:43.950 "state": "CLOSED", 00:36:43.950 "utilization": 1.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 2, 00:36:43.950 "state": "CLOSED", 00:36:43.950 "utilization": 1.0 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 3, 00:36:43.950 "state": "OPEN", 00:36:43.950 "utilization": 0.001953125 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "id": 4, 00:36:43.950 "state": "OPEN", 00:36:43.950 "utilization": 0.0 00:36:43.950 } 00:36:43.950 ], 00:36:43.950 "read-only": true 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "name": "verbose_mode", 00:36:43.950 "value": true, 00:36:43.950 "unit": "", 00:36:43.950 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:36:43.950 }, 00:36:43.950 { 00:36:43.950 "name": "prep_upgrade_on_shutdown", 00:36:43.950 "value": false, 00:36:43.950 "unit": "", 00:36:43.950 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:36:43.950 } 00:36:43.950 ] 00:36:43.950 } 00:36:43.951 13:30:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:36:44.210 [2024-12-06 13:30:50.680271] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.210 [2024-12-06 13:30:50.680354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:44.210 [2024-12-06 13:30:50.680388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:44.210 [2024-12-06 13:30:50.680399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.210 [2024-12-06 13:30:50.680430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.210 [2024-12-06 13:30:50.680443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:44.210 [2024-12-06 13:30:50.680454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:44.210 [2024-12-06 13:30:50.680463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.210 [2024-12-06 13:30:50.680486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.210 [2024-12-06 13:30:50.680498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:44.210 [2024-12-06 13:30:50.680508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:44.210 [2024-12-06 13:30:50.680517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.210 [2024-12-06 13:30:50.680582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.298 ms, result 0 00:36:44.210 true 00:36:44.210 13:30:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:36:44.210 13:30:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:44.210 13:30:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:36:44.469 13:30:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:36:44.470 13:30:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:36:44.470 13:30:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:36:45.038 [2024-12-06 13:30:51.256963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:45.038 [2024-12-06 13:30:51.257038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:45.038 [2024-12-06 13:30:51.257071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:36:45.038 [2024-12-06 13:30:51.257081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:45.038 [2024-12-06 13:30:51.257112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:45.038 [2024-12-06 13:30:51.257125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:45.038 [2024-12-06 13:30:51.257135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:45.038 [2024-12-06 13:30:51.257144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:45.038 [2024-12-06 13:30:51.257177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:45.038 [2024-12-06 13:30:51.257205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:45.038 [2024-12-06 13:30:51.257215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:36:45.038 [2024-12-06 
13:30:51.257224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:45.038 [2024-12-06 13:30:51.257324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.330 ms, result 0 00:36:45.038 true 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:45.038 { 00:36:45.038 "name": "ftl", 00:36:45.038 "properties": [ 00:36:45.038 { 00:36:45.038 "name": "superblock_version", 00:36:45.038 "value": 5, 00:36:45.038 "read-only": true 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "name": "base_device", 00:36:45.038 "bands": [ 00:36:45.038 { 00:36:45.038 "id": 0, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 1, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 2, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 3, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 4, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 5, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 6, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 7, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 8, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 9, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 10, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 11, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 12, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 13, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 14, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 15, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 16, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 17, 00:36:45.038 "state": "FREE", 00:36:45.038 "validity": 0.0 00:36:45.038 } 00:36:45.038 ], 00:36:45.038 "read-only": true 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "name": "cache_device", 00:36:45.038 "type": "bdev", 00:36:45.038 "chunks": [ 00:36:45.038 { 00:36:45.038 "id": 0, 00:36:45.038 "state": "INACTIVE", 00:36:45.038 "utilization": 0.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 1, 00:36:45.038 "state": "CLOSED", 00:36:45.038 "utilization": 1.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 2, 00:36:45.038 "state": "CLOSED", 00:36:45.038 "utilization": 1.0 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 3, 00:36:45.038 "state": "OPEN", 00:36:45.038 "utilization": 0.001953125 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "id": 4, 00:36:45.038 "state": "OPEN", 00:36:45.038 "utilization": 0.0 00:36:45.038 } 00:36:45.038 ], 00:36:45.038 "read-only": true 00:36:45.038 
}, 00:36:45.038 { 00:36:45.038 "name": "verbose_mode", 00:36:45.038 "value": true, 00:36:45.038 "unit": "", 00:36:45.038 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:36:45.038 }, 00:36:45.038 { 00:36:45.038 "name": "prep_upgrade_on_shutdown", 00:36:45.038 "value": true, 00:36:45.038 "unit": "", 00:36:45.038 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:36:45.038 } 00:36:45.038 ] 00:36:45.038 } 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83858 ]] 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83858 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83858 ']' 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83858 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83858 00:36:45.038 killing process with pid 83858 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83858' 00:36:45.038 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83858 00:36:45.039 13:30:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83858 00:36:45.977 [2024-12-06 13:30:52.453664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:36:45.977 [2024-12-06 13:30:52.469334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:45.977 [2024-12-06 13:30:52.469376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:36:45.977 [2024-12-06 13:30:52.469423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:45.977 [2024-12-06 13:30:52.469433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:45.977 [2024-12-06 13:30:52.469462] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:36:45.977 [2024-12-06 13:30:52.472708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:45.977 [2024-12-06 13:30:52.472767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:36:45.977 [2024-12-06 13:30:52.472793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.226 ms 00:36:45.977 [2024-12-06 13:30:52.472803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.836435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.836510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:55.963 [2024-12-06 13:31:00.836549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8363.650 ms 00:36:55.963 [2024-12-06 13:31:00.836559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 
13:31:00.837841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.837901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:55.963 [2024-12-06 13:31:00.837917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.261 ms 00:36:55.963 [2024-12-06 13:31:00.837929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.839225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.839268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:55.963 [2024-12-06 13:31:00.839296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.229 ms 00:36:55.963 [2024-12-06 13:31:00.839312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.850091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.850125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:55.963 [2024-12-06 13:31:00.850153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.727 ms 00:36:55.963 [2024-12-06 13:31:00.850163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.857057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.857108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:55.963 [2024-12-06 13:31:00.857137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.857 ms 00:36:55.963 [2024-12-06 13:31:00.857147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.857218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.857239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:55.963 [2024-12-06 13:31:00.857250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:36:55.963 [2024-12-06 13:31:00.857259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.867658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.867705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:36:55.963 [2024-12-06 13:31:00.867733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.380 ms 00:36:55.963 [2024-12-06 13:31:00.867742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.878306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.878351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:36:55.963 [2024-12-06 13:31:00.878379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.528 ms 00:36:55.963 [2024-12-06 13:31:00.878387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.888794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.888842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:55.963 [2024-12-06 13:31:00.888878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.372 ms 00:36:55.963 [2024-12-06 13:31:00.888890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:36:55.963 [2024-12-06 13:31:00.898966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.963 [2024-12-06 13:31:00.899013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:55.963 [2024-12-06 13:31:00.899041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.011 ms 00:36:55.963 [2024-12-06 13:31:00.899050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.963 [2024-12-06 13:31:00.899083] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:55.963 [2024-12-06 13:31:00.899114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:55.963 [2024-12-06 13:31:00.899127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:55.963 [2024-12-06 13:31:00.899137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:55.963 [2024-12-06 13:31:00.899148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:55.963 [2024-12-06 13:31:00.899157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:55.963 [2024-12-06 13:31:00.899167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:55.963 [2024-12-06 13:31:00.899176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:55.963 [2024-12-06 13:31:00.899186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:55.963 [2024-12-06 13:31:00.899195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:55.963 [2024-12-06 13:31:00.899205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:55.963 [2024-12-06 13:31:00.899214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:55.963 [2024-12-06 13:31:00.899224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:55.964 [2024-12-06 13:31:00.899233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:55.964 [2024-12-06 13:31:00.899258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:55.964 [2024-12-06 13:31:00.899284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:55.964 [2024-12-06 13:31:00.899310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:55.964 [2024-12-06 13:31:00.899321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:55.964 [2024-12-06 13:31:00.899331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:55.964 [2024-12-06 13:31:00.899344] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:55.964 [2024-12-06 13:31:00.899355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d3c6f60d-c4ff-4f3c-8b6d-18447d891aed 00:36:55.964 [2024-12-06 13:31:00.899365] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:55.964 [2024-12-06 
13:31:00.899374] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:36:55.964 [2024-12-06 13:31:00.899384] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:36:55.964 [2024-12-06 13:31:00.899394] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:36:55.964 [2024-12-06 13:31:00.899410] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:55.964 [2024-12-06 13:31:00.899420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:55.964 [2024-12-06 13:31:00.899433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:55.964 [2024-12-06 13:31:00.899442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:55.964 [2024-12-06 13:31:00.899451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:55.964 [2024-12-06 13:31:00.899460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.964 [2024-12-06 13:31:00.899470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:55.964 [2024-12-06 13:31:00.899482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:36:55.964 [2024-12-06 13:31:00.899492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:00.913741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.964 [2024-12-06 13:31:00.913790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:55.964 [2024-12-06 13:31:00.913827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.228 ms 00:36:55.964 [2024-12-06 13:31:00.913837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:00.914263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:55.964 [2024-12-06 13:31:00.914299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:36:55.964 [2024-12-06 13:31:00.914312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms 00:36:55.964 [2024-12-06 13:31:00.914323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:00.961461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:00.961512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:55.964 [2024-12-06 13:31:00.961542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:00.961552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:00.961614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:00.961628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:55.964 [2024-12-06 13:31:00.961638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:00.961648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:00.961815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:00.961865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:55.964 [2024-12-06 13:31:00.961885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:00.961895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:36:55.964 [2024-12-06 13:31:00.961920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:00.961933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:55.964 [2024-12-06 13:31:00.961960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:00.961972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.046222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:01.046282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:55.964 [2024-12-06 13:31:01.046320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:01.046330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.113610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:01.113659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:55.964 [2024-12-06 13:31:01.113691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:01.113700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.113792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:01.113807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:55.964 [2024-12-06 13:31:01.113833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:01.113880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.113970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:01.113987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:55.964 [2024-12-06 13:31:01.113999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:01.114008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.114117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:01.114154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:55.964 [2024-12-06 13:31:01.114167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:01.114176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.114233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:01.114249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:36:55.964 [2024-12-06 13:31:01.114260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:01.114269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.114310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:01.114329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:55.964 [2024-12-06 13:31:01.114340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:01.114349] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.114404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:55.964 [2024-12-06 13:31:01.114420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:55.964 [2024-12-06 13:31:01.114431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:55.964 [2024-12-06 13:31:01.114440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:55.964 [2024-12-06 13:31:01.114574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8645.267 ms, result 0 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84452 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84452 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84452 ']' 00:36:57.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:57.868 13:31:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:57.868 [2024-12-06 13:31:04.240984] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
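The tcp_target_setup trace above relaunches spdk_tgt from the saved tgt.json and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-poll pattern, assuming the helpers work roughly as the trace suggests (the loop body is illustrative, not the literal common.sh/autotest_common.sh code; paths are the ones shown in the log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # Poll until the target's UNIX-domain RPC socket accepts requests;
    # rpc.py exits non-zero while /var/tmp/spdk.sock is not listening yet.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done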
00:36:57.868 [2024-12-06 13:31:04.241172] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84452 ] 00:36:58.126 [2024-12-06 13:31:04.408202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.126 [2024-12-06 13:31:04.487551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.693 [2024-12-06 13:31:05.210706] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:58.693 [2024-12-06 13:31:05.210809] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:58.952 [2024-12-06 13:31:05.357514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.952 [2024-12-06 13:31:05.357573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:58.952 [2024-12-06 13:31:05.357607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:58.952 [2024-12-06 13:31:05.357617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.952 [2024-12-06 13:31:05.357680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.952 [2024-12-06 13:31:05.357696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:58.952 [2024-12-06 13:31:05.357706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:36:58.952 [2024-12-06 13:31:05.357715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.952 [2024-12-06 13:31:05.357750] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:58.952 [2024-12-06 13:31:05.358662] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:58.952 [2024-12-06 13:31:05.358709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.952 [2024-12-06 13:31:05.358721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:58.952 [2024-12-06 13:31:05.358731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.972 ms 00:36:58.952 [2024-12-06 13:31:05.358740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.952 [2024-12-06 13:31:05.359927] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:58.952 [2024-12-06 13:31:05.373369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.952 [2024-12-06 13:31:05.373422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:58.952 [2024-12-06 13:31:05.373444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.444 ms 00:36:58.952 [2024-12-06 13:31:05.373454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.952 [2024-12-06 13:31:05.373518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.952 [2024-12-06 13:31:05.373534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:58.952 [2024-12-06 13:31:05.373545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:36:58.952 [2024-12-06 13:31:05.373554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.953 [2024-12-06 13:31:05.377648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.953 [2024-12-06 
13:31:05.377701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:58.953 [2024-12-06 13:31:05.377730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.005 ms 00:36:58.953 [2024-12-06 13:31:05.377739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.953 [2024-12-06 13:31:05.377810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.953 [2024-12-06 13:31:05.377826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:58.953 [2024-12-06 13:31:05.377837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:36:58.953 [2024-12-06 13:31:05.377847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.953 [2024-12-06 13:31:05.377918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.953 [2024-12-06 13:31:05.377939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:58.953 [2024-12-06 13:31:05.377949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:36:58.953 [2024-12-06 13:31:05.377958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.953 [2024-12-06 13:31:05.378022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:58.953 [2024-12-06 13:31:05.381559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.953 [2024-12-06 13:31:05.381607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:58.953 [2024-12-06 13:31:05.381635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.561 ms 00:36:58.953 [2024-12-06 13:31:05.381649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.953 [2024-12-06 13:31:05.381680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.953 [2024-12-06 13:31:05.381693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:58.953 [2024-12-06 13:31:05.381703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:58.953 [2024-12-06 13:31:05.381712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.953 [2024-12-06 13:31:05.381739] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:58.953 [2024-12-06 13:31:05.381766] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:58.953 [2024-12-06 13:31:05.381802] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:58.953 [2024-12-06 13:31:05.381819] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:58.953 [2024-12-06 13:31:05.381980] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:58.953 [2024-12-06 13:31:05.381998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:58.953 [2024-12-06 13:31:05.382012] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:58.953 [2024-12-06 13:31:05.382025] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382037] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382053] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:58.953 [2024-12-06 13:31:05.382062] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:58.953 [2024-12-06 13:31:05.382072] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:58.953 [2024-12-06 13:31:05.382081] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:58.953 [2024-12-06 13:31:05.382092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.953 [2024-12-06 13:31:05.382102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:58.953 [2024-12-06 13:31:05.382112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:36:58.953 [2024-12-06 13:31:05.382122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.953 [2024-12-06 13:31:05.382209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.953 [2024-12-06 13:31:05.382222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:58.953 [2024-12-06 13:31:05.382237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:36:58.953 [2024-12-06 13:31:05.382246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.953 [2024-12-06 13:31:05.382373] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:58.953 [2024-12-06 13:31:05.382401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:58.953 [2024-12-06 13:31:05.382413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:58.953 [2024-12-06 13:31:05.382446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:58.953 [2024-12-06 13:31:05.382464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:58.953 [2024-12-06 13:31:05.382474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:58.953 [2024-12-06 13:31:05.382482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:58.953 [2024-12-06 13:31:05.382501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:58.953 [2024-12-06 13:31:05.382510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:58.953 [2024-12-06 13:31:05.382533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:36:58.953 [2024-12-06 13:31:05.382541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:58.953 [2024-12-06 13:31:05.382559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:58.953 [2024-12-06 13:31:05.382568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382577] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:58.953 [2024-12-06 13:31:05.382587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:58.953 [2024-12-06 13:31:05.382596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:58.953 [2024-12-06 13:31:05.382627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:58.953 [2024-12-06 13:31:05.382636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:58.953 [2024-12-06 13:31:05.382654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:58.953 [2024-12-06 13:31:05.382662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:58.953 [2024-12-06 13:31:05.382681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:58.953 [2024-12-06 13:31:05.382689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:58.953 [2024-12-06 13:31:05.382707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:58.953 [2024-12-06 13:31:05.382715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:58.953 [2024-12-06 13:31:05.382734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:58.953 [2024-12-06 13:31:05.382761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:58.953 [2024-12-06 13:31:05.382788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:58.953 [2024-12-06 13:31:05.382797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:58.953 [2024-12-06 13:31:05.382817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:58.953 [2024-12-06 13:31:05.382829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:58.953 [2024-12-06 13:31:05.382893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:58.953 [2024-12-06 13:31:05.382903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:58.953 [2024-12-06 13:31:05.382912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:58.953 [2024-12-06 13:31:05.382922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:58.953 [2024-12-06 13:31:05.382931] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:58.953 [2024-12-06 13:31:05.382941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:58.953 [2024-12-06 13:31:05.382952] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:58.953 [2024-12-06 13:31:05.382965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:58.953 [2024-12-06 13:31:05.382977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:58.953 [2024-12-06 13:31:05.382987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:58.953 [2024-12-06 13:31:05.382997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:58.953 [2024-12-06 13:31:05.383007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:58.954 [2024-12-06 13:31:05.383017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:58.954 [2024-12-06 13:31:05.383027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:58.954 [2024-12-06 13:31:05.383036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:58.954 [2024-12-06 13:31:05.383047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:58.954 [2024-12-06 13:31:05.383057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:58.954 [2024-12-06 13:31:05.383067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:58.954 [2024-12-06 13:31:05.383077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:58.954 [2024-12-06 13:31:05.383086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:58.954 [2024-12-06 13:31:05.383096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:58.954 [2024-12-06 13:31:05.383107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:58.954 [2024-12-06 13:31:05.383117] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:58.954 [2024-12-06 13:31:05.383128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:58.954 [2024-12-06 13:31:05.383139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:58.954 [2024-12-06 13:31:05.383150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:58.954 [2024-12-06 13:31:05.383160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:58.954 [2024-12-06 13:31:05.383172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:58.954 [2024-12-06 13:31:05.383184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:58.954 [2024-12-06 13:31:05.383195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:58.954 [2024-12-06 13:31:05.383207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:36:58.954 [2024-12-06 13:31:05.383232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:58.954 [2024-12-06 13:31:05.383288] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:36:58.954 [2024-12-06 13:31:05.383303] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:37:01.485 [2024-12-06 13:31:07.403713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.403795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:37:01.485 [2024-12-06 13:31:07.403862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2020.436 ms 00:37:01.485 [2024-12-06 13:31:07.403875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.430066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.430129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:01.485 [2024-12-06 13:31:07.430164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.837 ms 00:37:01.485 [2024-12-06 13:31:07.430174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.430318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.430341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:01.485 [2024-12-06 13:31:07.430351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:37:01.485 [2024-12-06 13:31:07.430360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.465239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.465315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:01.485 [2024-12-06 13:31:07.465351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.795 ms 00:37:01.485 [2024-12-06 13:31:07.465360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.465418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.465431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:01.485 [2024-12-06 13:31:07.465442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:01.485 [2024-12-06 13:31:07.465451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.465836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.465875] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:01.485 [2024-12-06 13:31:07.465889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:37:01.485 [2024-12-06 13:31:07.465899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.465958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.465972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:01.485 [2024-12-06 13:31:07.465982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:37:01.485 [2024-12-06 13:31:07.465992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.482121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.482177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:01.485 [2024-12-06 13:31:07.482223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.103 ms 00:37:01.485 [2024-12-06 13:31:07.482232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.510915] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:37:01.485 [2024-12-06 13:31:07.510971] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:01.485 [2024-12-06 13:31:07.511004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.511014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:37:01.485 [2024-12-06 13:31:07.511025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.645 ms 00:37:01.485 [2024-12-06 13:31:07.511041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.526258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.526325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:37:01.485 [2024-12-06 13:31:07.526356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.172 ms 00:37:01.485 [2024-12-06 13:31:07.526367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.540361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.540415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:37:01.485 [2024-12-06 13:31:07.540445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.948 ms 00:37:01.485 [2024-12-06 13:31:07.540455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.553897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.553947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:37:01.485 [2024-12-06 13:31:07.553977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.399 ms 00:37:01.485 [2024-12-06 13:31:07.553986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.554752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.554814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:01.485 [2024-12-06 
13:31:07.554843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.655 ms 00:37:01.485 [2024-12-06 13:31:07.554863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.616942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.617015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:01.485 [2024-12-06 13:31:07.617049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 62.053 ms 00:37:01.485 [2024-12-06 13:31:07.617059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.627575] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:01.485 [2024-12-06 13:31:07.628406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.628452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:01.485 [2024-12-06 13:31:07.628498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.283 ms 00:37:01.485 [2024-12-06 13:31:07.628508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.628608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.628643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:37:01.485 [2024-12-06 13:31:07.628655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:01.485 [2024-12-06 13:31:07.628666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.628756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.628774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:01.485 [2024-12-06 13:31:07.628786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:37:01.485 [2024-12-06 13:31:07.628796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.628828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.628842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:01.485 [2024-12-06 13:31:07.628874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:01.485 [2024-12-06 13:31:07.628900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.628954] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:01.485 [2024-12-06 13:31:07.628977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.628989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:01.485 [2024-12-06 13:31:07.629001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:37:01.485 [2024-12-06 13:31:07.629011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.654922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.654978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:37:01.485 [2024-12-06 13:31:07.655008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.879 ms 00:37:01.485 [2024-12-06 13:31:07.655018] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.655091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.655107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:01.485 [2024-12-06 13:31:07.655118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:37:01.485 [2024-12-06 13:31:07.655127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.485 [2024-12-06 13:31:07.656418] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2298.368 ms, result 0 00:37:01.485 [2024-12-06 13:31:07.671309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.485 [2024-12-06 13:31:07.687303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:01.485 [2024-12-06 13:31:07.695457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:01.485 13:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:01.485 13:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:01.485 13:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:01.485 13:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:37:01.485 13:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:01.485 [2024-12-06 13:31:07.991550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.485 [2024-12-06 13:31:07.991593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:01.485 [2024-12-06 13:31:07.991630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:01.485 [2024-12-06 13:31:07.991640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.486 [2024-12-06 13:31:07.991674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.486 [2024-12-06 13:31:07.991687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:01.486 [2024-12-06 13:31:07.991697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:01.486 [2024-12-06 13:31:07.991705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.486 [2024-12-06 13:31:07.991727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.486 [2024-12-06 13:31:07.991738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:01.486 [2024-12-06 13:31:07.991748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:01.486 [2024-12-06 13:31:07.991757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.486 [2024-12-06 13:31:07.991862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.260 ms, result 0 00:37:01.486 true 00:37:01.486 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:01.780 { 00:37:01.780 "name": "ftl", 00:37:01.780 "properties": [ 00:37:01.780 { 00:37:01.780 "name": "superblock_version", 00:37:01.780 "value": 5, 00:37:01.780 "read-only": true 00:37:01.780 }, 
00:37:01.780 { 00:37:01.780 "name": "base_device", 00:37:01.780 "bands": [ 00:37:01.780 { 00:37:01.780 "id": 0, 00:37:01.780 "state": "CLOSED", 00:37:01.780 "validity": 1.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 1, 00:37:01.780 "state": "CLOSED", 00:37:01.780 "validity": 1.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 2, 00:37:01.780 "state": "CLOSED", 00:37:01.780 "validity": 0.007843137254901933 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 3, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 4, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 5, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 6, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 7, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 8, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 9, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 10, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 11, 00:37:01.780 "state": "FREE", 00:37:01.780 "validity": 0.0 00:37:01.780 }, 00:37:01.780 { 00:37:01.780 "id": 12, 00:37:01.780 "state": "FREE", 00:37:01.781 "validity": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 13, 00:37:01.781 "state": "FREE", 00:37:01.781 "validity": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 14, 00:37:01.781 "state": "FREE", 00:37:01.781 "validity": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 15, 00:37:01.781 "state": "FREE", 00:37:01.781 "validity": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 16, 00:37:01.781 "state": "FREE", 00:37:01.781 "validity": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 17, 00:37:01.781 "state": "FREE", 00:37:01.781 "validity": 0.0 00:37:01.781 } 00:37:01.781 ], 00:37:01.781 "read-only": true 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "name": "cache_device", 00:37:01.781 "type": "bdev", 00:37:01.781 "chunks": [ 00:37:01.781 { 00:37:01.781 "id": 0, 00:37:01.781 "state": "INACTIVE", 00:37:01.781 "utilization": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 1, 00:37:01.781 "state": "OPEN", 00:37:01.781 "utilization": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 2, 00:37:01.781 "state": "OPEN", 00:37:01.781 "utilization": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 3, 00:37:01.781 "state": "FREE", 00:37:01.781 "utilization": 0.0 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "id": 4, 00:37:01.781 "state": "FREE", 00:37:01.781 "utilization": 0.0 00:37:01.781 } 00:37:01.781 ], 00:37:01.781 "read-only": true 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "name": "verbose_mode", 00:37:01.781 "value": true, 00:37:01.781 "unit": "", 00:37:01.781 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:01.781 }, 00:37:01.781 { 00:37:01.781 "name": "prep_upgrade_on_shutdown", 00:37:01.781 "value": false, 00:37:01.781 "unit": "", 00:37:01.781 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:01.781 } 00:37:01.781 ] 00:37:01.781 } 00:37:01.781 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:37:01.781 13:31:08 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:01.781 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:02.050 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:37:02.050 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:37:02.050 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:37:02.050 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:02.050 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:37:02.309 Validate MD5 checksum, iteration 1 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:02.309 13:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:02.578 [2024-12-06 13:31:08.928747] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
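The jq pipelines above reduce the bdev_ftl_get_properties JSON to single counts (used cache chunks, opened bands) so the script can assert both are 0 after the clean shutdown. A standalone equivalent of the first check, with the jq filter copied verbatim from the trace:

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # The test treats a nonzero count as failure ([[ $used -ne 0 ]] at
    # upgrade_shutdown.sh@83 in the trace above): every NV cache chunk
    # should have been drained before the target went down.
    [[ $used -ne 0 ]] && exit 1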
00:37:02.578 [2024-12-06 13:31:08.928950] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84518 ] 00:37:02.837 [2024-12-06 13:31:09.115178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.837 [2024-12-06 13:31:09.238075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.743  [2024-12-06T13:31:11.839Z] Copying: 497/1024 [MB] (497 MBps) [2024-12-06T13:31:12.098Z] Copying: 962/1024 [MB] (465 MBps) [2024-12-06T13:31:13.474Z] Copying: 1024/1024 [MB] (average 475 MBps) 00:37:06.946 00:37:06.946 13:31:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:37:06.946 13:31:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:08.851 Validate MD5 checksum, iteration 2 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=972199def19643bc394fe844ef8d6e57 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 972199def19643bc394fe844ef8d6e57 != \9\7\2\1\9\9\d\e\f\1\9\6\4\3\b\c\3\9\4\f\e\8\4\4\e\f\8\d\6\e\5\7 ]] 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:08.851 13:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:08.851 [2024-12-06 13:31:15.063719] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
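
Each validation iteration above follows the same pattern: spdk_dd, acting as an NVMe/TCP initiator pinned to core 1, copies a 1 GiB window (1024 blocks of 1 MiB at queue depth 2) out of ftln1 into a scratch file, the file is hashed, and the hash is checked against the value recorded for that window. A condensed sketch of the loop, paraphrasing test/ftl/upgrade_shutdown.sh rather than quoting it; sums[] is a hypothetical array name for the per-window reference MD5s:

    rootdir=/home/vagrant/spdk_repo/spdk
    file=$rootdir/test/ftl/file
    iterations=2 skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd expands to this spdk_dd invocation, as shown in the trace:
        "$rootdir/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir/test/ftl/config/ini.json" \
            --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 '-d ')
        if [[ -z ${sums[i]:-} ]]; then
            sums[i]=$sum                    # pre-shutdown round: record the sum
        elif [[ $sum != "${sums[i]}" ]]; then
            exit 1                          # post-recovery round: any drift fails
        fi
    done
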
00:37:08.851 [2024-12-06 13:31:15.063958] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84581 ] 00:37:08.851 [2024-12-06 13:31:15.234238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.851 [2024-12-06 13:31:15.315452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.756  [2024-12-06T13:31:17.852Z] Copying: 492/1024 [MB] (492 MBps) [2024-12-06T13:31:18.110Z] Copying: 966/1024 [MB] (474 MBps) [2024-12-06T13:31:19.047Z] Copying: 1024/1024 [MB] (average 482 MBps) 00:37:12.519 00:37:12.519 13:31:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:37:12.519 13:31:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fe3baee20618415985fcced1e8af10ad 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fe3baee20618415985fcced1e8af10ad != \f\e\3\b\a\e\e\2\0\6\1\8\4\1\5\9\8\5\f\c\c\e\d\1\e\8\a\f\1\0\a\d ]] 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84452 ]] 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84452 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84645 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84645 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84645 ']' 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
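
The kill -9 traced above is the pivotal step of the test: SIGKILL gives FTL no chance to run its shutdown path, so the superblock is left without the clean flag and the open cache chunks still hold data that only the recovery path can replay. The target is then relaunched from the tgt.json config captured earlier, recreating the same bdev stack. A paraphrase of the tcp_target_shutdown_dirty / tcp_target_setup pair from test/ftl/common.sh (a sketch, not the verbatim functions):

    kill -9 "$spdk_tgt_pid"    # SIGKILL: no FTL shutdown, superblock stays dirty
    unset spdk_tgt_pid
    # Relaunch from the saved JSON config so the same base device, cache
    # device, and FTL instance are recreated on core 0.
    "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$rootdir/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # autotest_common.sh helper: poll /var/tmp/spdk.sock
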
00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.426 13:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:14.426 [2024-12-06 13:31:20.772175] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:37:14.426 [2024-12-06 13:31:20.772370] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84645 ] 00:37:14.426 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84452 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:37:14.426 [2024-12-06 13:31:20.938024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.686 [2024-12-06 13:31:21.022076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.254 [2024-12-06 13:31:21.736127] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:15.254 [2024-12-06 13:31:21.736197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:15.515 [2024-12-06 13:31:21.881450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.881489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:15.515 [2024-12-06 13:31:21.881507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:15.515 [2024-12-06 13:31:21.881516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.881579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.881596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:15.515 [2024-12-06 13:31:21.881606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:37:15.515 [2024-12-06 13:31:21.881615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.881650] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:15.515 [2024-12-06 13:31:21.882435] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:15.515 [2024-12-06 13:31:21.882460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.882471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:15.515 [2024-12-06 13:31:21.882481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.824 ms 00:37:15.515 [2024-12-06 13:31:21.882491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.882941] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:37:15.515 [2024-12-06 13:31:21.899492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.899544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:15.515 [2024-12-06 13:31:21.899560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.552 ms 00:37:15.515 [2024-12-06 13:31:21.899570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.908797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:37:15.515 [2024-12-06 13:31:21.908832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:15.515 [2024-12-06 13:31:21.908864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:37:15.515 [2024-12-06 13:31:21.908875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.909253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.909277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:15.515 [2024-12-06 13:31:21.909290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 00:37:15.515 [2024-12-06 13:31:21.909301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.909372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.909390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:15.515 [2024-12-06 13:31:21.909400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:37:15.515 [2024-12-06 13:31:21.909409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.909456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.909502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:15.515 [2024-12-06 13:31:21.909529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:37:15.515 [2024-12-06 13:31:21.909538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.909568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:15.515 [2024-12-06 13:31:21.912966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.912995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:15.515 [2024-12-06 13:31:21.913008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.404 ms 00:37:15.515 [2024-12-06 13:31:21.913017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.913057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.913072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:15.515 [2024-12-06 13:31:21.913082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:15.515 [2024-12-06 13:31:21.913091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.913131] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:15.515 [2024-12-06 13:31:21.913157] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:37:15.515 [2024-12-06 13:31:21.913191] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:15.515 [2024-12-06 13:31:21.913210] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:37:15.515 [2024-12-06 13:31:21.913297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:15.515 [2024-12-06 13:31:21.913310] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:15.515 [2024-12-06 13:31:21.913321] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:37:15.515 [2024-12-06 13:31:21.913332] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:15.515 [2024-12-06 13:31:21.913343] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:37:15.515 [2024-12-06 13:31:21.913352] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:15.515 [2024-12-06 13:31:21.913361] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:15.515 [2024-12-06 13:31:21.913369] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:15.515 [2024-12-06 13:31:21.913378] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:15.515 [2024-12-06 13:31:21.913392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.913401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:15.515 [2024-12-06 13:31:21.913411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:37:15.515 [2024-12-06 13:31:21.913419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.913493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.515 [2024-12-06 13:31:21.913505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:15.515 [2024-12-06 13:31:21.913515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:37:15.515 [2024-12-06 13:31:21.913523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.515 [2024-12-06 13:31:21.913611] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:15.515 [2024-12-06 13:31:21.913630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:15.515 [2024-12-06 13:31:21.913640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:15.515 [2024-12-06 13:31:21.913649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.515 [2024-12-06 13:31:21.913658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:15.515 [2024-12-06 13:31:21.913666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:15.515 [2024-12-06 13:31:21.913675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:15.515 [2024-12-06 13:31:21.913683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:15.515 [2024-12-06 13:31:21.913817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:15.515 [2024-12-06 13:31:21.913827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.515 [2024-12-06 13:31:21.913835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:15.515 [2024-12-06 13:31:21.913877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:15.515 [2024-12-06 13:31:21.913893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.515 [2024-12-06 13:31:21.913917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:15.515 [2024-12-06 13:31:21.913932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:37:15.515 [2024-12-06 13:31:21.913942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.515 [2024-12-06 13:31:21.913950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:15.515 [2024-12-06 13:31:21.913959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:15.515 [2024-12-06 13:31:21.913967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.515 [2024-12-06 13:31:21.913977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:15.515 [2024-12-06 13:31:21.913986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:15.515 [2024-12-06 13:31:21.914007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:15.515 [2024-12-06 13:31:21.914016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:15.515 [2024-12-06 13:31:21.914025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:15.515 [2024-12-06 13:31:21.914034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:15.515 [2024-12-06 13:31:21.914042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:15.516 [2024-12-06 13:31:21.914051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:15.516 [2024-12-06 13:31:21.914060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:15.516 [2024-12-06 13:31:21.914068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:15.516 [2024-12-06 13:31:21.914077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:15.516 [2024-12-06 13:31:21.914085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:15.516 [2024-12-06 13:31:21.914094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:15.516 [2024-12-06 13:31:21.914103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:15.516 [2024-12-06 13:31:21.914112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.516 [2024-12-06 13:31:21.914121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:15.516 [2024-12-06 13:31:21.914129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:15.516 [2024-12-06 13:31:21.914138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.516 [2024-12-06 13:31:21.914146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:15.516 [2024-12-06 13:31:21.914155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:15.516 [2024-12-06 13:31:21.914164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.516 [2024-12-06 13:31:21.914172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:15.516 [2024-12-06 13:31:21.914181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:15.516 [2024-12-06 13:31:21.914190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:15.516 [2024-12-06 13:31:21.914199] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:15.516 [2024-12-06 13:31:21.914210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:15.516 [2024-12-06 13:31:21.914219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:15.516 [2024-12-06 13:31:21.914230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:37:15.516 [2024-12-06 13:31:21.914239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:15.516 [2024-12-06 13:31:21.914265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:15.516 [2024-12-06 13:31:21.914288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:15.516 [2024-12-06 13:31:21.914311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:15.516 [2024-12-06 13:31:21.914320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:15.516 [2024-12-06 13:31:21.914328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:15.516 [2024-12-06 13:31:21.914339] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:15.516 [2024-12-06 13:31:21.914350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:15.516 [2024-12-06 13:31:21.914370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:15.516 [2024-12-06 13:31:21.914397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:15.516 [2024-12-06 13:31:21.914406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:15.516 [2024-12-06 13:31:21.914416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:15.516 [2024-12-06 13:31:21.914425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:15.516 [2024-12-06 13:31:21.914489] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:37:15.516 [2024-12-06 13:31:21.914499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:15.516 [2024-12-06 13:31:21.914525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:15.516 [2024-12-06 13:31:21.914535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:15.516 [2024-12-06 13:31:21.914544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:15.516 [2024-12-06 13:31:21.914554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.914564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:15.516 [2024-12-06 13:31:21.914574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.996 ms 00:37:15.516 [2024-12-06 13:31:21.914583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:21.939248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.939290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:15.516 [2024-12-06 13:31:21.939306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.594 ms 00:37:15.516 [2024-12-06 13:31:21.939316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:21.939366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.939380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:15.516 [2024-12-06 13:31:21.939391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:37:15.516 [2024-12-06 13:31:21.939399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:21.971027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.971085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:15.516 [2024-12-06 13:31:21.971101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.560 ms 00:37:15.516 [2024-12-06 13:31:21.971111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:21.971164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.971180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:15.516 [2024-12-06 13:31:21.971190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:15.516 [2024-12-06 13:31:21.971206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:21.971391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.971408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:15.516 [2024-12-06 13:31:21.971419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:37:15.516 [2024-12-06 13:31:21.971429] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:21.971482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.971503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:15.516 [2024-12-06 13:31:21.971515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:37:15.516 [2024-12-06 13:31:21.971525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:21.986326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.986361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:15.516 [2024-12-06 13:31:21.986375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.765 ms 00:37:15.516 [2024-12-06 13:31:21.986385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:21.986534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:21.986571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:37:15.516 [2024-12-06 13:31:21.986599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:37:15.516 [2024-12-06 13:31:21.986624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:22.023179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:22.023215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:37:15.516 [2024-12-06 13:31:22.023230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.523 ms 00:37:15.516 [2024-12-06 13:31:22.023241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.516 [2024-12-06 13:31:22.033009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.516 [2024-12-06 13:31:22.033041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:15.516 [2024-12-06 13:31:22.033064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:37:15.516 [2024-12-06 13:31:22.033074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.776 [2024-12-06 13:31:22.091274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.776 [2024-12-06 13:31:22.091344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:15.776 [2024-12-06 13:31:22.091364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 58.135 ms 00:37:15.776 [2024-12-06 13:31:22.091373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.776 [2024-12-06 13:31:22.091530] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:37:15.776 [2024-12-06 13:31:22.091651] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:37:15.776 [2024-12-06 13:31:22.091768] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:37:15.776 [2024-12-06 13:31:22.091952] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:37:15.776 [2024-12-06 13:31:22.091970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.776 [2024-12-06 13:31:22.091981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:37:15.776 [2024-12-06 
13:31:22.091993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:37:15.776 [2024-12-06 13:31:22.092003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.776 [2024-12-06 13:31:22.092111] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:37:15.776 [2024-12-06 13:31:22.092132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.776 [2024-12-06 13:31:22.092161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:37:15.776 [2024-12-06 13:31:22.092172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:37:15.776 [2024-12-06 13:31:22.092183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.776 [2024-12-06 13:31:22.107785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.776 [2024-12-06 13:31:22.107833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:37:15.776 [2024-12-06 13:31:22.107883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.574 ms 00:37:15.776 [2024-12-06 13:31:22.107895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.776 [2024-12-06 13:31:22.116916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.776 [2024-12-06 13:31:22.116949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:37:15.776 [2024-12-06 13:31:22.116962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:37:15.776 [2024-12-06 13:31:22.116972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:15.776 [2024-12-06 13:31:22.117073] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:37:15.776 [2024-12-06 13:31:22.117207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:15.776 [2024-12-06 13:31:22.117220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:37:15.776 [2024-12-06 13:31:22.117230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.137 ms 00:37:15.776 [2024-12-06 13:31:22.117239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.344 [2024-12-06 13:31:22.707747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.344 [2024-12-06 13:31:22.707890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:37:16.344 [2024-12-06 13:31:22.707914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 589.579 ms 00:37:16.344 [2024-12-06 13:31:22.707927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.344 [2024-12-06 13:31:22.712297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.344 [2024-12-06 13:31:22.712350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:37:16.344 [2024-12-06 13:31:22.712367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.123 ms 00:37:16.344 [2024-12-06 13:31:22.712378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.344 [2024-12-06 13:31:22.712934] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:37:16.344 [2024-12-06 13:31:22.712968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.344 [2024-12-06 13:31:22.712980] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:37:16.344 [2024-12-06 13:31:22.712993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:37:16.344 [2024-12-06 13:31:22.713019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.344 [2024-12-06 13:31:22.713058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.344 [2024-12-06 13:31:22.713075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:37:16.344 [2024-12-06 13:31:22.713086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:16.344 [2024-12-06 13:31:22.713104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.344 [2024-12-06 13:31:22.713192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 596.092 ms, result 0 00:37:16.344 [2024-12-06 13:31:22.713243] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:37:16.344 [2024-12-06 13:31:22.713324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.344 [2024-12-06 13:31:22.713337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:37:16.344 [2024-12-06 13:31:22.713347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.082 ms 00:37:16.344 [2024-12-06 13:31:22.713373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.910 [2024-12-06 13:31:23.300921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.910 [2024-12-06 13:31:23.301030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:37:16.911 [2024-12-06 13:31:23.301096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 586.549 ms 00:37:16.911 [2024-12-06 13:31:23.301107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.305672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.305726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:37:16.911 [2024-12-06 13:31:23.305741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.113 ms 00:37:16.911 [2024-12-06 13:31:23.305753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.306356] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:37:16.911 [2024-12-06 13:31:23.306408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.306421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:37:16.911 [2024-12-06 13:31:23.306434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.615 ms 00:37:16.911 [2024-12-06 13:31:23.306444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.306516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.306533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:37:16.911 [2024-12-06 13:31:23.306545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:16.911 [2024-12-06 13:31:23.306555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 
13:31:23.306645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 593.389 ms, result 0 00:37:16.911 [2024-12-06 13:31:23.306698] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:37:16.911 [2024-12-06 13:31:23.306714] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:16.911 [2024-12-06 13:31:23.306726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.306736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:37:16.911 [2024-12-06 13:31:23.306746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1189.686 ms 00:37:16.911 [2024-12-06 13:31:23.306756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.306790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.306810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:37:16.911 [2024-12-06 13:31:23.306821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:16.911 [2024-12-06 13:31:23.306831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.317874] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:16.911 [2024-12-06 13:31:23.318039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.318055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:16.911 [2024-12-06 13:31:23.318067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.188 ms 00:37:16.911 [2024-12-06 13:31:23.318077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.318774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.318801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:37:16.911 [2024-12-06 13:31:23.318818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.574 ms 00:37:16.911 [2024-12-06 13:31:23.318844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.321153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.321193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:37:16.911 [2024-12-06 13:31:23.321204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.273 ms 00:37:16.911 [2024-12-06 13:31:23.321214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.321258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.321272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:37:16.911 [2024-12-06 13:31:23.321282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:16.911 [2024-12-06 13:31:23.321297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.321404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.321421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:16.911 
[2024-12-06 13:31:23.321432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:37:16.911 [2024-12-06 13:31:23.321441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.321465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.321477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:16.911 [2024-12-06 13:31:23.321488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:16.911 [2024-12-06 13:31:23.321497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.321540] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:16.911 [2024-12-06 13:31:23.321556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.321565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:16.911 [2024-12-06 13:31:23.321575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:37:16.911 [2024-12-06 13:31:23.321585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.321636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.911 [2024-12-06 13:31:23.321650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:16.911 [2024-12-06 13:31:23.321660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:37:16.911 [2024-12-06 13:31:23.321670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.911 [2024-12-06 13:31:23.322940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1440.953 ms, result 0 00:37:16.911 [2024-12-06 13:31:23.338382] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.911 [2024-12-06 13:31:23.354359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:16.911 [2024-12-06 13:31:23.362835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:37:17.169 Validate MD5 checksum, iteration 1 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:17.169 13:31:23 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:17.169 13:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:17.169 [2024-12-06 13:31:23.526238] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:37:17.169 [2024-12-06 13:31:23.526632] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84674 ] 00:37:17.427 [2024-12-06 13:31:23.696285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.427 [2024-12-06 13:31:23.820974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.325  [2024-12-06T13:31:26.420Z] Copying: 492/1024 [MB] (492 MBps) [2024-12-06T13:31:26.677Z] Copying: 972/1024 [MB] (480 MBps) [2024-12-06T13:31:27.612Z] Copying: 1024/1024 [MB] (average 486 MBps) 00:37:21.084 00:37:21.084 13:31:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:37:21.084 13:31:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:22.991 Validate MD5 checksum, iteration 2 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=972199def19643bc394fe844ef8d6e57 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 972199def19643bc394fe844ef8d6e57 != \9\7\2\1\9\9\d\e\f\1\9\6\4\3\b\c\3\9\4\f\e\8\4\4\e\f\8\d\6\e\5\7 ]] 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:22.991 13:31:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:22.991 
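
Both post-recovery iterations reproduce exactly the sums captured before the kill -9 (972199def19643bc394fe844ef8d6e57 and fe3baee20618415985fcced1e8af10ad), which is the test's pass condition: the dirty-shutdown recovery traced above lost no acknowledged data. The odd-looking comparisons in the trace, e.g. [[ ... != \9\7\2... ]], are bash xtrace output of a literal string match: every character of the expected value appears backslash-escaped because the right-hand side of != inside [[ ]] is a glob pattern. An unescaped equivalent of the same assertion:

    expected=972199def19643bc394fe844ef8d6e57    # iteration-1 sum recorded before the kill -9
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    if [[ $sum != "$expected" ]]; then           # quoting the right side disables globbing
        echo 'MD5 mismatch after FTL dirty-shutdown recovery' >&2
        exit 1
    fi
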
[2024-12-06 13:31:29.467559] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:37:22.991 [2024-12-06 13:31:29.467724] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84740 ] 00:37:23.251 [2024-12-06 13:31:29.652324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.251 [2024-12-06 13:31:29.765705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.158  [2024-12-06T13:31:32.622Z] Copying: 498/1024 [MB] (498 MBps) [2024-12-06T13:31:32.622Z] Copying: 983/1024 [MB] (485 MBps) [2024-12-06T13:31:33.190Z] Copying: 1024/1024 [MB] (average 492 MBps) 00:37:26.662 00:37:26.920 13:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:37:26.921 13:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fe3baee20618415985fcced1e8af10ad 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fe3baee20618415985fcced1e8af10ad != \f\e\3\b\a\e\e\2\0\6\1\8\4\1\5\9\8\5\f\c\c\e\d\1\e\8\a\f\1\0\a\d ]] 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84645 ]] 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84645 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84645 ']' 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84645 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84645 00:37:28.831 killing process with pid 84645 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84645'
00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84645
00:37:28.831 13:31:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84645
00:37:29.769 [2024-12-06 13:31:35.992190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:37:29.769 [2024-12-06 13:31:36.007365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.007410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:37:29.769 [2024-12-06 13:31:36.007445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:37:29.769 [2024-12-06 13:31:36.007455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.007484] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:37:29.769 [2024-12-06 13:31:36.010576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.010783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:37:29.769 [2024-12-06 13:31:36.010811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.072 ms
00:37:29.769 [2024-12-06 13:31:36.010823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.011112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.011134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:37:29.769 [2024-12-06 13:31:36.011147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.222 ms
00:37:29.769 [2024-12-06 13:31:36.011158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.012538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.012577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:37:29.769 [2024-12-06 13:31:36.012609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.360 ms
00:37:29.769 [2024-12-06 13:31:36.012628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.013921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.014117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:37:29.769 [2024-12-06 13:31:36.014144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.238 ms
00:37:29.769 [2024-12-06 13:31:36.014156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.024862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.025107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:37:29.769 [2024-12-06 13:31:36.025144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.650 ms
00:37:29.769 [2024-12-06 13:31:36.025159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.030975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.031012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:37:29.769 [2024-12-06 13:31:36.031044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.769 ms
00:37:29.769 [2024-12-06 13:31:36.031054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.031126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.031144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:37:29.769 [2024-12-06 13:31:36.031155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms
00:37:29.769 [2024-12-06 13:31:36.031172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.041482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.041518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:37:29.769 [2024-12-06 13:31:36.041549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.290 ms
00:37:29.769 [2024-12-06 13:31:36.041559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.052499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.052700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:37:29.769 [2024-12-06 13:31:36.052726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.903 ms
00:37:29.769 [2024-12-06 13:31:36.052737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.062961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.063161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:37:29.769 [2024-12-06 13:31:36.063187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.164 ms
00:37:29.769 [2024-12-06 13:31:36.063199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.073495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.769 [2024-12-06 13:31:36.073530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:37:29.769 [2024-12-06 13:31:36.073561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.221 ms
00:37:29.769 [2024-12-06 13:31:36.073570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.769 [2024-12-06 13:31:36.073607] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:37:29.769 [2024-12-06 13:31:36.073627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:37:29.769 [2024-12-06 13:31:36.073640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:37:29.769 [2024-12-06 13:31:36.073651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:37:29.769 [2024-12-06 13:31:36.073661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:37:29.769 [2024-12-06 13:31:36.073819] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:37:29.769 [2024-12-06 13:31:36.073828] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d3c6f60d-c4ff-4f3c-8b6d-18447d891aed
00:37:29.769 [2024-12-06 13:31:36.073838] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:37:29.769 [2024-12-06 13:31:36.073861] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:37:29.769 [2024-12-06 13:31:36.073870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:37:29.770 [2024-12-06 13:31:36.073880] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:37:29.770 [2024-12-06 13:31:36.073888] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:37:29.770 [2024-12-06 13:31:36.073910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:37:29.770 [2024-12-06 13:31:36.073924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:37:29.770 [2024-12-06 13:31:36.073933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:37:29.770 [2024-12-06 13:31:36.073941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:37:29.770 [2024-12-06 13:31:36.073951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.770 [2024-12-06 13:31:36.073960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:37:29.770 [2024-12-06 13:31:36.073971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.346 ms
00:37:29.770 [2024-12-06 13:31:36.073981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
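One line in the stats dump above that can read like an error is "WAF: inf". It is plain arithmetic rather than a failure: write amplification factor is the ratio of media writes to host writes (the standard flash-storage definition, not anything SPDK-specific), and this shutdown-only pass issued no user writes:

    \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{320}{0} = \infty

which the dump prints as inf.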
00:37:29.770 [2024-12-06 13:31:36.087715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.770 [2024-12-06 13:31:36.087971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:37:29.770 [2024-12-06 13:31:36.088089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.712 ms
00:37:29.770 [2024-12-06 13:31:36.088138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.088620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:37:29.770 [2024-12-06 13:31:36.088768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:37:29.770 [2024-12-06 13:31:36.088900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms
00:37:29.770 [2024-12-06 13:31:36.089031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.133346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.133548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:37:29.770 [2024-12-06 13:31:36.133660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.133717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.133786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.133990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:37:29.770 [2024-12-06 13:31:36.134041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.134078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.134230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.134300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:37:29.770 [2024-12-06 13:31:36.134337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.134371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.134491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.134542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:37:29.770 [2024-12-06 13:31:36.134559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.134570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.216935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.217172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:37:29.770 [2024-12-06 13:31:36.217216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.217230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.284769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.285026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:37:29.770 [2024-12-06 13:31:36.285057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.285069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.285172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.285191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:37:29.770 [2024-12-06 13:31:36.285202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.285213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.285287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.285354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:37:29.770 [2024-12-06 13:31:36.285365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.285375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.285498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.285517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:37:29.770 [2024-12-06 13:31:36.285529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.285539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.285617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.285633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:37:29.770 [2024-12-06 13:31:36.285650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.285659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.285699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.285713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:37:29.770 [2024-12-06 13:31:36.285722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.285731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.285777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:37:29.770 [2024-12-06 13:31:36.285798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:37:29.770 [2024-12-06 13:31:36.285808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:37:29.770 [2024-12-06 13:31:36.285818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:37:29.770 [2024-12-06 13:31:36.285956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 278.544 ms, result 0
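Every management step in the shutdown above is bracketed by trace_step notices carrying a name, a duration and a status, so the slow phases (the roughly 10 ms metadata persists versus the sub-millisecond steps) can be pulled straight out of the console text. A throwaway summarizer, assuming this output has been saved to a file called console.log (that file name is illustrative, not something the suite produces):

    awk '/trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                   printf "%10.3f ms  %s\n", $0, step }' console.log | sort -rn

Each duration line is paired with the most recent name line, which is exactly the order trace_step emits them in.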
00:37:30.709 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:37:30.709 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:37:30.709 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:37:30.709 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:37:30.709 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:37:30.709 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:37:30.709 Remove shared memory files
13:31:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:37:30.709 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:37:30.710 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84452
00:37:30.710 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:37:30.710 13:31:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:37:30.710 ************************************
00:37:30.710 END TEST ftl_upgrade_shutdown
00:37:30.710 ************************************
00:37:30.710
00:37:30.710 real 1m25.909s
00:37:30.710 user 2m2.749s
00:37:30.710 sys 0m21.693s
00:37:30.710 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:30.710 13:31:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:37:30.969 13:31:37 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:37:30.969 13:31:37 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:37:30.969 13:31:37 ftl -- ftl/ftl.sh@14 -- # killprocess 77133
00:37:30.969 Process with pid 77133 is not found
13:31:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 77133 ']'
00:37:30.969 13:31:37 ftl -- common/autotest_common.sh@958 -- # kill -0 77133
00:37:30.969 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77133) - No such process
00:37:30.969 13:31:37 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77133 is not found'
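The killprocess trace above shows the usual defensive shape of that helper: probe with kill -0 before killing, so a pid that already exited (77133 here, a target stopped earlier in the run) is reported rather than treated as an error. A minimal standalone sketch of the pattern, simplified from what the xtrace shows rather than copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        # kill -0 sends no signal; it only tests whether the process still exists
        if ! kill -0 "$pid" 2> /dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" # reap the process and pick up its exit status
    }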
00:37:30.969 13:31:37 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:37:30.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:31:37 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84850
00:37:30.969 13:31:37 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:37:30.969 13:31:37 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84850
00:37:30.969 13:31:37 ftl -- common/autotest_common.sh@835 -- # '[' -z 84850 ']'
00:37:30.969 13:31:37 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:30.969 13:31:37 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:30.969 13:31:37 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:30.969 13:31:37 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:30.969 13:31:37 ftl -- common/autotest_common.sh@10 -- # set +x
00:37:30.969 [2024-12-06 13:31:37.340122] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:37:30.969 [2024-12-06 13:31:37.340588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84850 ]
00:37:31.228 [2024-12-06 13:31:37.508619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:31.228 [2024-12-06 13:31:37.601873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:31.795 13:31:38 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:31.795 13:31:38 ftl -- common/autotest_common.sh@868 -- # return 0
00:37:31.795 13:31:38 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:37:32.362 nvme0n1
00:37:32.362 13:31:38 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:37:32.362 13:31:38 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:37:32.362 13:31:38 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:37:32.362 13:31:38 ftl -- ftl/common.sh@28 -- # stores=c1441fd8-c2f8-4867-b9bf-de8af70c5430
00:37:32.362 13:31:38 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:37:32.362 13:31:38 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1441fd8-c2f8-4867-b9bf-de8af70c5430
00:37:32.621 13:31:39 ftl -- ftl/ftl.sh@23 -- # killprocess 84850
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@954 -- # '[' -z 84850 ']'
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@958 -- # kill -0 84850
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@959 -- # uname
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84850
00:37:32.621 killing process with pid 84850
13:31:39 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84850'
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@973 -- # kill 84850
00:37:32.621 13:31:39 ftl -- common/autotest_common.sh@978 -- # wait 84850
00:37:34.539 13:31:40 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:37:34.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:37:34.539 Waiting for block devices as requested
00:37:34.798 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:37:34.798 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:37:34.798 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:37:35.057 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:37:40.329 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:37:40.329 13:31:46 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:37:40.329 Remove shared memory files
13:31:46 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
13:31:46 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:37:40.329 13:31:46 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:37:40.329 13:31:46 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:37:40.329 13:31:46 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:37:40.329 13:31:46 ftl -- ftl/common.sh@209 -- # rm -f rm -f
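The at_ftl_exit path above restarts a bare spdk_tgt, re-attaches the NVMe controller, and deletes any lvstore left over from the tests before resetting the PCI devices. The same RPC sequence can be replayed by hand; a sketch that reuses the rpc.py path, the bdev name nvme0, and the BDF 0000:00:11.0 from this run:

    #!/usr/bin/env bash
    # Re-attach the controller, then delete every leftover lvstore on it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # bdev_lvol_get_lvstores returns a JSON array; pull each store's uuid out with jq
    for lvs in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done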
00:37:40.329 ************************************
00:37:40.329 END TEST ftl
************************************
00:37:40.329
00:37:40.330 real 11m41.536s
00:37:40.330 user 14m47.530s
00:37:40.330 sys 1m31.978s
00:37:40.330 13:31:46 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:40.330 13:31:46 ftl -- common/autotest_common.sh@10 -- # set +x
00:37:40.330 13:31:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:37:40.330 13:31:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:37:40.330 13:31:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:37:40.330 13:31:46 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:37:40.330 13:31:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:37:40.330 13:31:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:37:40.330 13:31:46 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:37:40.330 13:31:46 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:37:40.330 13:31:46 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:37:40.330 13:31:46 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:37:40.330 13:31:46 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:40.330 13:31:46 -- common/autotest_common.sh@10 -- # set +x
00:37:40.330 13:31:46 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:37:40.330 13:31:46 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:37:40.330 13:31:46 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:37:40.330 13:31:46 -- common/autotest_common.sh@10 -- # set +x
00:37:41.723 INFO: APP EXITING
00:37:41.723 INFO: killing all VMs
00:37:41.723 INFO: killing vhost app
00:37:41.723 INFO: EXIT DONE
00:37:42.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:37:42.550 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:37:42.550 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:37:42.550 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:37:42.550 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:37:43.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:37:43.376 Cleaning
00:37:43.376 Removing: /var/run/dpdk/spdk0/config
00:37:43.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:37:43.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:37:43.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:37:43.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:37:43.376 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:37:43.376 Removing: /var/run/dpdk/spdk0/hugepage_info
00:37:43.376 Removing: /var/run/dpdk/spdk0
00:37:43.376 Removing: /var/run/dpdk/spdk_pid58160
00:37:43.376 Removing: /var/run/dpdk/spdk_pid58389
00:37:43.376 Removing: /var/run/dpdk/spdk_pid58613
00:37:43.376 Removing: /var/run/dpdk/spdk_pid58717
00:37:43.376 Removing: /var/run/dpdk/spdk_pid58773
00:37:43.376 Removing: /var/run/dpdk/spdk_pid58901
00:37:43.376 Removing: /var/run/dpdk/spdk_pid58919
00:37:43.376 Removing: /var/run/dpdk/spdk_pid59129
00:37:43.376 Removing: /var/run/dpdk/spdk_pid59238
00:37:43.376 Removing: /var/run/dpdk/spdk_pid59344
00:37:43.376 Removing: /var/run/dpdk/spdk_pid59469
00:37:43.376 Removing: /var/run/dpdk/spdk_pid59577
00:37:43.376 Removing: /var/run/dpdk/spdk_pid59617
00:37:43.376 Removing: /var/run/dpdk/spdk_pid59653
00:37:43.637 Removing: /var/run/dpdk/spdk_pid59724
00:37:43.637 Removing: /var/run/dpdk/spdk_pid59835
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60323
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60393
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60467
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60483
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60631
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60647
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60795
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60811
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60881
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60904
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60967
00:37:43.637 Removing: /var/run/dpdk/spdk_pid60986
00:37:43.637 Removing: /var/run/dpdk/spdk_pid61174
00:37:43.637 Removing: /var/run/dpdk/spdk_pid61209
00:37:43.637 Removing: /var/run/dpdk/spdk_pid61294
00:37:43.637 Removing: /var/run/dpdk/spdk_pid61482
00:37:43.637 Removing: /var/run/dpdk/spdk_pid61572
00:37:43.637 Removing: /var/run/dpdk/spdk_pid61614
00:37:43.637 Removing: /var/run/dpdk/spdk_pid62102
00:37:43.637 Removing: /var/run/dpdk/spdk_pid62206
00:37:43.637 Removing: /var/run/dpdk/spdk_pid62321
00:37:43.637 Removing: /var/run/dpdk/spdk_pid62374
00:37:43.637 Removing: /var/run/dpdk/spdk_pid62405
00:37:43.637 Removing: /var/run/dpdk/spdk_pid62489
00:37:43.637 Removing: /var/run/dpdk/spdk_pid63125
00:37:43.637 Removing: /var/run/dpdk/spdk_pid63167
00:37:43.637 Removing: /var/run/dpdk/spdk_pid63688
00:37:43.637 Removing: /var/run/dpdk/spdk_pid63792
00:37:43.637 Removing: /var/run/dpdk/spdk_pid63912
00:37:43.637 Removing: /var/run/dpdk/spdk_pid63965
00:37:43.637 Removing: /var/run/dpdk/spdk_pid63985
00:37:43.637 Removing: /var/run/dpdk/spdk_pid64016
00:37:43.637 Removing: /var/run/dpdk/spdk_pid65906
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66043
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66053
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66070
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66111
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66115
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66127
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66176
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66183
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66195
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66240
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66246
00:37:43.637 Removing: /var/run/dpdk/spdk_pid66258
00:37:43.637 Removing: /var/run/dpdk/spdk_pid67659
00:37:43.637 Removing: /var/run/dpdk/spdk_pid67773
00:37:43.637 Removing: /var/run/dpdk/spdk_pid69180
00:37:43.637 Removing: /var/run/dpdk/spdk_pid70942
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71016
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71097
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71201
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71304
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71400
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71474
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71555
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71665
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71762
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71858
00:37:43.637 Removing: /var/run/dpdk/spdk_pid71938
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72014
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72123
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72215
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72322
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72396
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72482
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72591
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72688
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72784
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72860
00:37:43.637 Removing: /var/run/dpdk/spdk_pid72941
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73016
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73090
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73199
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73290
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73387
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73467
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73548
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73618
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73699
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73808
00:37:43.637 Removing: /var/run/dpdk/spdk_pid73900
00:37:43.637 Removing: /var/run/dpdk/spdk_pid74044
00:37:43.637 Removing: /var/run/dpdk/spdk_pid74334
00:37:43.637 Removing: /var/run/dpdk/spdk_pid74369
00:37:43.912 Removing: /var/run/dpdk/spdk_pid74863
00:37:43.912 Removing: /var/run/dpdk/spdk_pid75058
00:37:43.912 Removing: /var/run/dpdk/spdk_pid75158
00:37:43.912 Removing: /var/run/dpdk/spdk_pid75269
00:37:43.912 Removing: /var/run/dpdk/spdk_pid75318
00:37:43.912 Removing: /var/run/dpdk/spdk_pid75348
00:37:43.912 Removing: /var/run/dpdk/spdk_pid75638
00:37:43.912 Removing: /var/run/dpdk/spdk_pid75703
00:37:43.912 Removing: /var/run/dpdk/spdk_pid75790
00:37:43.912 Removing: /var/run/dpdk/spdk_pid76202
00:37:43.912 Removing: /var/run/dpdk/spdk_pid76348
00:37:43.912 Removing: /var/run/dpdk/spdk_pid77133
00:37:43.912 Removing: /var/run/dpdk/spdk_pid77282
00:37:43.912 Removing: /var/run/dpdk/spdk_pid77480
00:37:43.912 Removing: /var/run/dpdk/spdk_pid77583
00:37:43.912 Removing: /var/run/dpdk/spdk_pid77963
00:37:43.912 Removing: /var/run/dpdk/spdk_pid78239
00:37:43.912 Removing: /var/run/dpdk/spdk_pid78588
00:37:43.912 Removing: /var/run/dpdk/spdk_pid78803
00:37:43.912 Removing: /var/run/dpdk/spdk_pid78930
00:37:43.912 Removing: /var/run/dpdk/spdk_pid78996
00:37:43.912 Removing: /var/run/dpdk/spdk_pid79155
00:37:43.912 Removing: /var/run/dpdk/spdk_pid79190
00:37:43.912 Removing: /var/run/dpdk/spdk_pid79255
00:37:43.912 Removing: /var/run/dpdk/spdk_pid79459
00:37:43.912 Removing: /var/run/dpdk/spdk_pid79707
00:37:43.912 Removing: /var/run/dpdk/spdk_pid80088
00:37:43.912 Removing: /var/run/dpdk/spdk_pid80522
00:37:43.912 Removing: /var/run/dpdk/spdk_pid80947
00:37:43.912 Removing: /var/run/dpdk/spdk_pid81462
00:37:43.912 Removing: /var/run/dpdk/spdk_pid81603
00:37:43.912 Removing: /var/run/dpdk/spdk_pid81710
00:37:43.912 Removing: /var/run/dpdk/spdk_pid82377
00:37:43.912 Removing: /var/run/dpdk/spdk_pid82461
00:37:43.912 Removing: /var/run/dpdk/spdk_pid82885
00:37:43.912 Removing: /var/run/dpdk/spdk_pid83320
00:37:43.912 Removing: /var/run/dpdk/spdk_pid83858
00:37:43.912 Removing: /var/run/dpdk/spdk_pid83994
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84040
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84100
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84162
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84232
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84452
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84518
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84581
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84645
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84674
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84740
00:37:43.912 Removing: /var/run/dpdk/spdk_pid84850
00:37:43.912 Clean
00:37:43.912 13:31:50 -- common/autotest_common.sh@1453 -- # return 0
00:37:43.912 13:31:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:37:43.912 13:31:50 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:43.912 13:31:50 -- common/autotest_common.sh@10 -- # set +x
00:37:43.912 13:31:50 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:37:43.912 13:31:50 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:43.912 13:31:50 -- common/autotest_common.sh@10 -- # set +x
00:37:44.196 13:31:50 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:44.196 13:31:50 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:37:44.196 13:31:50 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:37:44.196 13:31:50 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:37:44.196 13:31:50 -- spdk/autotest.sh@398 -- # hostname
00:37:44.196 13:31:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:37:44.196 geninfo: WARNING: invalid characters removed from testname!
00:38:10.755 13:32:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:10.755 13:32:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:12.662 13:32:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:15.198 13:32:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:17.736 13:32:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:19.640 13:32:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:22.175 13:32:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
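The coverage stage above is a standard lcov aggregation: capture what the test run executed, merge it with the pre-test baseline, then strip paths that should not count against SPDK's own coverage. Reduced to its skeleton (the long --rc flag list from the log is omitted for brevity, and $repo plus the short .info names are stand-ins for the full paths):

    # Capture coverage accumulated while the tests ran
    lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o cov_test.info
    # Merge the baseline capture with the test capture
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Remove code that should not count toward coverage
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pattern" -o cov_total.info
    done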
00:38:22.175 13:32:28 -- spdk/autorun.sh@1 -- $ timing_finish
00:38:22.175 13:32:28 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:38:22.175 13:32:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:22.175 13:32:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:22.175 13:32:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:38:22.175 + [[ -n 5293 ]]
00:38:22.175 + sudo kill 5293
00:38:22.184 [Pipeline] }
00:38:22.201 [Pipeline] // timeout
00:38:22.206 [Pipeline] }
00:38:22.221 [Pipeline] // stage
00:38:22.227 [Pipeline] }
00:38:22.242 [Pipeline] // catchError
00:38:22.252 [Pipeline] stage
00:38:22.253 [Pipeline] { (Stop VM)
00:38:22.265 [Pipeline] sh
00:38:22.543 + vagrant halt
00:38:25.830 ==> default: Halting domain...
00:38:31.124 [Pipeline] sh
00:38:31.406 + vagrant destroy -f
00:38:33.969 ==> default: Removing domain...
00:38:34.551 [Pipeline] sh
00:38:34.833 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:38:34.843 [Pipeline] }
00:38:34.858 [Pipeline] // stage
00:38:34.863 [Pipeline] }
00:38:34.879 [Pipeline] // dir
00:38:34.885 [Pipeline] }
00:38:34.900 [Pipeline] // wrap
00:38:34.906 [Pipeline] }
00:38:34.920 [Pipeline] // catchError
00:38:34.929 [Pipeline] stage
00:38:34.931 [Pipeline] { (Epilogue)
00:38:34.945 [Pipeline] sh
00:38:35.227 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:40.515 [Pipeline] catchError
00:38:40.517 [Pipeline] {
00:38:40.531 [Pipeline] sh
00:38:40.812 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:41.070 Artifacts sizes are good
00:38:41.079 [Pipeline] }
00:38:41.096 [Pipeline] // catchError
00:38:41.105 [Pipeline] archiveArtifacts
00:38:41.110 Archiving artifacts
00:38:41.224 [Pipeline] cleanWs
00:38:41.233 [WS-CLEANUP] Deleting project workspace...
00:38:41.233 [WS-CLEANUP] Deferred wipeout is used...
00:38:41.239 [WS-CLEANUP] done
00:38:41.241 [Pipeline] }
00:38:41.256 [Pipeline] // stage
00:38:41.261 [Pipeline] }
00:38:41.275 [Pipeline] // node
00:38:41.280 [Pipeline] End of Pipeline
00:38:41.390 Finished: SUCCESS