00:00:00.000 Started by upstream project "autotest-per-patch" build number 132801
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.103 The recommended git tool is: git
00:00:00.104 using credential 00000000-0000-0000-0000-000000000002
00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.141 Fetching changes from the remote Git repository
00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.189 Using shallow fetch with depth 1
00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.189 > git --version # timeout=10
00:00:00.222 > git --version # 'git version 2.39.2'
00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.748 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.758 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.769 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.769 > git config core.sparsecheckout # timeout=10
00:00:06.779 > git read-tree -mu HEAD # timeout=10
00:00:06.793 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.821 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.821 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.978 [Pipeline] Start of Pipeline
00:00:06.996 [Pipeline] library
00:00:06.998 Loading library shm_lib@master
00:00:06.999 Library shm_lib@master is cached. Copying from home.
00:00:07.015 [Pipeline] node
00:00:07.028 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.030 [Pipeline] {
00:00:07.044 [Pipeline] catchError
00:00:07.046 [Pipeline] {
00:00:07.063 [Pipeline] wrap
00:00:07.074 [Pipeline] {
00:00:07.083 [Pipeline] stage
00:00:07.085 [Pipeline] { (Prologue)
00:00:07.099 [Pipeline] echo
00:00:07.100 Node: VM-host-SM38
00:00:07.106 [Pipeline] cleanWs
00:00:07.115 [WS-CLEANUP] Deleting project workspace...
00:00:07.115 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.121 [WS-CLEANUP] done
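[Annotation] The prologue above pins the jbp repo to one revision via a shallow fetch. A minimal bash sketch of reproducing that checkout by hand, assuming no credential helper, proxy, or Jenkins timeouts are needed:

# Re-run of the shallow, pinned checkout performed above.
git init jbp && cd jbp
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f db4637e8b949f278f369ec13f70585206ccd9507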
00:00:07.421 [Pipeline] setCustomBuildProperty
00:00:07.500 [Pipeline] httpRequest
00:00:08.369 [Pipeline] echo
00:00:08.371 Sorcerer 10.211.164.112 is alive
00:00:08.377 [Pipeline] retry
00:00:08.378 [Pipeline] {
00:00:08.386 [Pipeline] httpRequest
00:00:08.391 HttpMethod: GET
00:00:08.391 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.392 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.394 Response Code: HTTP/1.1 200 OK
00:00:08.394 Success: Status code 200 is in the accepted range: 200,404
00:00:08.395 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.705 [Pipeline] }
00:00:09.720 [Pipeline] // retry
00:00:09.727 [Pipeline] sh
00:00:10.015 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.031 [Pipeline] httpRequest
00:00:10.454 [Pipeline] echo
00:00:10.456 Sorcerer 10.211.164.112 is alive
00:00:10.463 [Pipeline] retry
00:00:10.465 [Pipeline] {
00:00:10.476 [Pipeline] httpRequest
00:00:10.480 HttpMethod: GET
00:00:10.481 URL: http://10.211.164.112/packages/spdk_3318278a6b7e81edb06174f0a9d84218a31af88f.tar.gz
00:00:10.482 Sending request to url: http://10.211.164.112/packages/spdk_3318278a6b7e81edb06174f0a9d84218a31af88f.tar.gz
00:00:10.497 Response Code: HTTP/1.1 200 OK
00:00:10.498 Success: Status code 200 is in the accepted range: 200,404
00:00:10.498 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_3318278a6b7e81edb06174f0a9d84218a31af88f.tar.gz
00:01:11.638 [Pipeline] }
00:01:11.654 [Pipeline] // retry
00:01:11.661 [Pipeline] sh
00:01:11.939 + tar --no-same-owner -xf spdk_3318278a6b7e81edb06174f0a9d84218a31af88f.tar.gz
00:01:14.491 [Pipeline] sh
00:01:14.769 + git -C spdk log --oneline -n5
00:01:14.769 3318278a6 vhost: check if vsession exists before remove scsi vdev
00:01:14.769 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:14.769 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:14.769 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:14.769 0ea9ac02f accel/mlx5: Create pool of UMRs
00:01:14.788 [Pipeline] writeFile
00:01:14.802 [Pipeline] sh
00:01:15.084 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:15.096 [Pipeline] sh
00:01:15.374 + cat autorun-spdk.conf
00:01:15.374 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.374 SPDK_TEST_NVME=1
00:01:15.374 SPDK_TEST_FTL=1
00:01:15.374 SPDK_TEST_ISAL=1
00:01:15.374 SPDK_RUN_ASAN=1
00:01:15.374 SPDK_RUN_UBSAN=1
00:01:15.374 SPDK_TEST_XNVME=1
00:01:15.374 SPDK_TEST_NVME_FDP=1
00:01:15.374 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:15.381 RUN_NIGHTLY=0
00:01:15.383 [Pipeline] }
00:01:15.396 [Pipeline] // stage
00:01:15.410 [Pipeline] stage
00:01:15.412 [Pipeline] { (Run VM)
00:01:15.425 [Pipeline] sh
00:01:15.705 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:15.705 + echo 'Start stage prepare_nvme.sh'
00:01:15.705 Start stage prepare_nvme.sh
00:01:15.705 + [[ -n 2 ]]
00:01:15.705 + disk_prefix=ex2
00:01:15.705 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:15.705 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:15.705 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:15.705 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.705 ++ SPDK_TEST_NVME=1
00:01:15.705 ++ SPDK_TEST_FTL=1
00:01:15.705 ++ SPDK_TEST_ISAL=1
00:01:15.705 ++ SPDK_RUN_ASAN=1
00:01:15.705 ++ SPDK_RUN_UBSAN=1
00:01:15.705 ++ SPDK_TEST_XNVME=1
00:01:15.705 ++ SPDK_TEST_NVME_FDP=1
00:01:15.705 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:15.705 ++ RUN_NIGHTLY=0
00:01:15.705 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:15.705 + nvme_files=()
00:01:15.705 + declare -A nvme_files
00:01:15.705 + backend_dir=/var/lib/libvirt/images/backends
00:01:15.705 + nvme_files['nvme.img']=5G
00:01:15.705 + nvme_files['nvme-cmb.img']=5G
00:01:15.705 + nvme_files['nvme-multi0.img']=4G
00:01:15.705 + nvme_files['nvme-multi1.img']=4G
00:01:15.705 + nvme_files['nvme-multi2.img']=4G
00:01:15.705 + nvme_files['nvme-openstack.img']=8G
00:01:15.705 + nvme_files['nvme-zns.img']=5G
00:01:15.705 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:15.705 + (( SPDK_TEST_FTL == 1 ))
00:01:15.705 + nvme_files["nvme-ftl.img"]=6G
00:01:15.705 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:15.705 + nvme_files["nvme-fdp.img"]=1G
00:01:15.705 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:15.705 + for nvme in "${!nvme_files[@]}"
00:01:15.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:15.705 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.705 + for nvme in "${!nvme_files[@]}"
00:01:15.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:01:15.705 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:15.705 + for nvme in "${!nvme_files[@]}"
00:01:15.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:15.705 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.705 + for nvme in "${!nvme_files[@]}"
00:01:15.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:15.967 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:15.967 + for nvme in "${!nvme_files[@]}"
00:01:15.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:15.967 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.967 + for nvme in "${!nvme_files[@]}"
00:01:15.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:15.967 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.967 + for nvme in "${!nvme_files[@]}"
00:01:15.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:15.967 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.967 + for nvme in "${!nvme_files[@]}"
00:01:15.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:01:15.967 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:15.967 + for nvme in "${!nvme_files[@]}"
00:01:15.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:16.538 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
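[Annotation] Each create_nvme_img.sh call above prints qemu-img's standard "Formatting ..." banner. The script itself is not shown in this log; a minimal sketch of one equivalent invocation, assuming the script simply wraps qemu-img create:

# Hypothetical stand-in for create_nvme_img.sh -n <file> -s <size>; qemu-img
# emits the "Formatting '...', fmt=raw size=... preallocation=falloc" lines seen above.
sudo qemu-img create -f raw -o preallocation=falloc \
    /var/lib/libvirt/images/backends/ex2-nvme-multi2.img 4G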
00:01:16.538 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:16.538 + echo 'End stage prepare_nvme.sh'
00:01:16.538 End stage prepare_nvme.sh
00:01:16.551 [Pipeline] sh
00:01:16.833 + DISTRO=fedora39
00:01:16.833 + CPUS=10
00:01:16.833 + RAM=12288
00:01:16.833 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:16.833 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:16.833 
00:01:16.833 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:16.833 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:16.833 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:16.833 HELP=0
00:01:16.833 DRY_RUN=0
00:01:16.833 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:01:16.833 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:16.833 NVME_AUTO_CREATE=0
00:01:16.833 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:01:16.833 NVME_CMB=,,,,
00:01:16.833 NVME_PMR=,,,,
00:01:16.833 NVME_ZNS=,,,,
00:01:16.833 NVME_MS=true,,,,
00:01:16.833 NVME_FDP=,,,on,
00:01:16.833 SPDK_VAGRANT_DISTRO=fedora39
00:01:16.833 SPDK_VAGRANT_VMCPU=10
00:01:16.833 SPDK_VAGRANT_VMRAM=12288
00:01:16.833 SPDK_VAGRANT_PROVIDER=libvirt
00:01:16.833 SPDK_VAGRANT_HTTP_PROXY=
00:01:16.833 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:16.833 SPDK_OPENSTACK_NETWORK=0
00:01:16.833 VAGRANT_PACKAGE_BOX=0
00:01:16.833 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:16.833 FORCE_DISTRO=true
00:01:16.833 VAGRANT_BOX_VERSION=
00:01:16.833 EXTRA_VAGRANTFILES=
00:01:16.833 NIC_MODEL=e1000
00:01:16.833 
00:01:16.833 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:16.833 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:19.372 Bringing machine 'default' up with 'libvirt' provider...
00:01:19.632 ==> default: Creating image (snapshot of base box volume).
00:01:19.632 ==> default: Creating domain with the following settings...
00:01:19.632 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733752461_2aa11200078e3a585358
00:01:19.632 ==> default: -- Domain type: kvm
00:01:19.632 ==> default: -- Cpus: 10
00:01:19.632 ==> default: -- Feature: acpi
00:01:19.632 ==> default: -- Feature: apic
00:01:19.632 ==> default: -- Feature: pae
00:01:19.632 ==> default: -- Memory: 12288M
00:01:19.632 ==> default: -- Memory Backing: hugepages:
00:01:19.632 ==> default: -- Management MAC:
00:01:19.632 ==> default: -- Loader:
00:01:19.632 ==> default: -- Nvram:
00:01:19.632 ==> default: -- Base box: spdk/fedora39
00:01:19.632 ==> default: -- Storage pool: default
00:01:19.632 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733752461_2aa11200078e3a585358.img (20G)
00:01:19.632 ==> default: -- Volume Cache: default
00:01:19.632 ==> default: -- Kernel:
00:01:19.632 ==> default: -- Initrd:
00:01:19.632 ==> default: -- Graphics Type: vnc
00:01:19.632 ==> default: -- Graphics Port: -1
00:01:19.632 ==> default: -- Graphics IP: 127.0.0.1
00:01:19.632 ==> default: -- Graphics Password: Not defined
00:01:19.632 ==> default: -- Video Type: cirrus
00:01:19.632 ==> default: -- Video VRAM: 9216
00:01:19.632 ==> default: -- Sound Type:
00:01:19.632 ==> default: -- Keymap: en-us
00:01:19.632 ==> default: -- TPM Path:
00:01:19.632 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:19.632 ==> default: -- Command line args:
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:19.632 ==> default: -> value=-drive,
00:01:19.632 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:19.632 ==> default: -> value=-drive,
00:01:19.632 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:19.632 ==> default: -> value=-drive,
00:01:19.632 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.632 ==> default: -> value=-drive,
00:01:19.632 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.632 ==> default: -> value=-drive,
00:01:19.632 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:19.632 ==> default: -> value=-drive,
00:01:19.632 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:19.632 ==> default: -> value=-device,
00:01:19.632 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
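[Annotation] The -device/-drive pairs above are easier to read reassembled into one invocation. A sketch of just the FDP-enabled controller (nvme-3), built verbatim from the values printed above; the real command line also carries the machine/memory setup and the other three controllers:

# One NVMe subsystem with FDP on (96M reclaim-unit size, 2 reclaim groups,
# 8 reclaim unit handles), one controller attached to it, one raw-backed namespace.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096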
00:01:19.893 ==> default: Creating shared folders metadata...
00:01:19.894 ==> default: Starting domain.
00:01:21.809 ==> default: Waiting for domain to get an IP address...
00:01:43.777 ==> default: Waiting for SSH to become available...
00:01:43.777 ==> default: Configuring and enabling network interfaces...
00:01:47.099 default: SSH address: 192.168.121.227:22
00:01:47.099 default: SSH username: vagrant
00:01:47.099 default: SSH auth method: private key
00:01:48.586 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:58.589 ==> default: Mounting SSHFS shared folder...
00:01:59.163 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:59.163 ==> default: Checking Mount..
00:02:00.550 ==> default: Folder Successfully Mounted!
00:02:00.550 
00:02:00.550 SUCCESS!
00:02:00.550 
00:02:00.550 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:00.550 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:00.550 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:00.550 
00:02:00.560 [Pipeline] }
00:02:00.575 [Pipeline] // stage
00:02:00.583 [Pipeline] dir
00:02:00.584 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:00.586 [Pipeline] {
00:02:00.599 [Pipeline] catchError
00:02:00.600 [Pipeline] {
00:02:00.612 [Pipeline] sh
00:02:00.895 + vagrant ssh-config --host vagrant
00:02:00.895 + sed -ne '/^Host/,$p'
00:02:00.895 + tee ssh_conf
00:02:04.199 Host vagrant
00:02:04.199 HostName 192.168.121.227
00:02:04.199 User vagrant
00:02:04.199 Port 22
00:02:04.199 UserKnownHostsFile /dev/null
00:02:04.199 StrictHostKeyChecking no
00:02:04.199 PasswordAuthentication no
00:02:04.199 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:04.199 IdentitiesOnly yes
00:02:04.199 LogLevel FATAL
00:02:04.199 ForwardAgent yes
00:02:04.199 ForwardX11 yes
00:02:04.199 
00:02:04.215 [Pipeline] withEnv
00:02:04.217 [Pipeline] {
00:02:04.230 [Pipeline] sh
00:02:04.514 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:04.514 source /etc/os-release
00:02:04.514 [[ -e /image.version ]] && img=$(< /image.version)
00:02:04.514 # Minimal, systemd-like check.
00:02:04.514 if [[ -e /.dockerenv ]]; then
00:02:04.514 # Clear garbage from the node'\''s name:
00:02:04.514 # agt-er_autotest_547-896 -> autotest_547-896
00:02:04.514 # $HOSTNAME is the actual container id
00:02:04.514 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:04.514 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:04.514 # We can assume this is a mount from a host where container is running,
00:02:04.514 # so fetch its hostname to easily identify the target swarm worker.
00:02:04.514 container="$(< /etc/hostname) ($agent)"
00:02:04.514 else
00:02:04.514 # Fallback
00:02:04.514 container=$agent
00:02:04.514 fi
00:02:04.514 fi
00:02:04.514 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:04.514 '
00:02:04.790 [Pipeline] }
00:02:04.806 [Pipeline] // withEnv
00:02:04.815 [Pipeline] setCustomBuildProperty
00:02:04.829 [Pipeline] stage
00:02:04.831 [Pipeline] { (Tests)
00:02:04.847 [Pipeline] sh
00:02:05.132 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:05.408 [Pipeline] sh
00:02:05.693 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:05.971 [Pipeline] timeout
00:02:05.971 Timeout set to expire in 50 min
00:02:05.973 [Pipeline] {
00:02:05.988 [Pipeline] sh
00:02:06.272 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:06.845 HEAD is now at 3318278a6 vhost: check if vsession exists before remove scsi vdev
00:02:06.859 [Pipeline] sh
00:02:07.144 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:07.421 [Pipeline] sh
00:02:07.707 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:07.985 [Pipeline] sh
00:02:08.273 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:08.535 ++ readlink -f spdk_repo
00:02:08.535 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:08.535 + [[ -n /home/vagrant/spdk_repo ]]
00:02:08.535 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:08.535 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:08.535 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:08.535 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:08.535 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:08.535 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:08.535 + cd /home/vagrant/spdk_repo
00:02:08.535 + source /etc/os-release
00:02:08.535 ++ NAME='Fedora Linux'
00:02:08.535 ++ VERSION='39 (Cloud Edition)'
00:02:08.535 ++ ID=fedora
00:02:08.535 ++ VERSION_ID=39
00:02:08.535 ++ VERSION_CODENAME=
00:02:08.535 ++ PLATFORM_ID=platform:f39
00:02:08.535 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:08.535 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:08.535 ++ LOGO=fedora-logo-icon
00:02:08.535 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:08.535 ++ HOME_URL=https://fedoraproject.org/
00:02:08.535 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:08.535 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:08.535 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:08.535 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:08.535 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:08.535 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:08.535 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:08.535 ++ SUPPORT_END=2024-11-12
00:02:08.535 ++ VARIANT='Cloud Edition'
00:02:08.535 ++ VARIANT_ID=cloud
00:02:08.535 + uname -a
00:02:08.535 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:08.535 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:08.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:09.058 Hugepages
00:02:09.058 node hugesize free / total
00:02:09.058 node0 1048576kB 0 / 0
00:02:09.058 node0 2048kB 0 / 0
00:02:09.058 
00:02:09.058 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:09.058 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:09.320 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:09.320 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:09.320 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:02:09.320 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:09.320 + rm -f /tmp/spdk-ld-path
00:02:09.320 + source autorun-spdk.conf
00:02:09.320 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.320 ++ SPDK_TEST_NVME=1
00:02:09.320 ++ SPDK_TEST_FTL=1
00:02:09.320 ++ SPDK_TEST_ISAL=1
00:02:09.320 ++ SPDK_RUN_ASAN=1
00:02:09.320 ++ SPDK_RUN_UBSAN=1
00:02:09.320 ++ SPDK_TEST_XNVME=1
00:02:09.320 ++ SPDK_TEST_NVME_FDP=1
00:02:09.320 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:09.320 ++ RUN_NIGHTLY=0
00:02:09.320 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:09.320 + [[ -n '' ]]
00:02:09.320 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:09.320 + for M in /var/spdk/build-*-manifest.txt
00:02:09.320 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:09.320 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.320 + for M in /var/spdk/build-*-manifest.txt
00:02:09.320 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:09.320 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.320 + for M in /var/spdk/build-*-manifest.txt
00:02:09.320 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:09.320 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:09.320 ++ uname
00:02:09.320 + [[ Linux == \L\i\n\u\x ]]
00:02:09.320 + sudo dmesg -T
00:02:09.320 + sudo dmesg --clear
00:02:09.320 + dmesg_pid=5027
00:02:09.320 + [[ Fedora Linux == FreeBSD ]]
00:02:09.320 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:09.320 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:09.320 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:09.320 + [[ -x /usr/src/fio-static/fio ]]
00:02:09.320 + sudo dmesg -Tw
00:02:09.320 + export FIO_BIN=/usr/src/fio-static/fio
00:02:09.320 + FIO_BIN=/usr/src/fio-static/fio
00:02:09.320 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:09.320 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:09.320 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:09.320 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:09.320 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:09.320 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:09.320 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:09.320 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:09.320 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:09.582 13:55:11 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:09.582 13:55:11 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:09.582 13:55:11 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:09.582 13:55:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:09.582 13:55:11 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:09.582 13:55:11 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:09.582 13:55:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:09.582 13:55:11 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:09.582 13:55:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:09.582 13:55:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:09.582 13:55:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:09.582 13:55:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:09.582 13:55:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:09.582 13:55:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:09.582 13:55:11 -- paths/export.sh@5 -- $ export PATH
00:02:09.582 13:55:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:09.582 13:55:11 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:09.582 13:55:11 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:09.582 13:55:11 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733752511.XXXXXX
00:02:09.582 13:55:11 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733752511.INRDXe
00:02:09.582 13:55:11 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:09.582 13:55:11 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:09.582 13:55:11 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:09.582 13:55:11 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:09.582 13:55:11 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:09.582 13:55:11 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:09.582 13:55:11 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:09.582 13:55:11 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.582 13:55:11 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:09.582 13:55:11 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:09.582 13:55:11 -- pm/common@17 -- $ local monitor
00:02:09.582 13:55:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:09.582 13:55:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:09.582 13:55:11 -- pm/common@25 -- $ sleep 1
00:02:09.582 13:55:11 -- pm/common@21 -- $ date +%s
00:02:09.582 13:55:11 -- pm/common@21 -- $ date +%s
00:02:09.582 13:55:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733752511
00:02:09.582 13:55:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733752511
00:02:09.582 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733752511_collect-cpu-load.pm.log
00:02:09.582 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733752511_collect-vmstat.pm.log
00:02:10.526 13:55:12 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:10.526 13:55:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:10.526 13:55:12 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:10.526 13:55:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:10.526 13:55:12 -- spdk/autobuild.sh@16 -- $ date -u
00:02:10.526 Mon Dec 9 01:55:12 PM UTC 2024
00:02:10.526 13:55:12 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:10.526 v25.01-pre-312-g3318278a6
00:02:10.526 13:55:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:10.526 13:55:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:10.526 13:55:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:10.526 13:55:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:10.526 13:55:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.526 ************************************
00:02:10.526 START TEST asan
00:02:10.526 ************************************
00:02:10.526 using asan
00:02:10.526 13:55:12 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:10.526 
00:02:10.526 real 0m0.000s
00:02:10.526 user 0m0.000s
00:02:10.526 sys 0m0.000s
00:02:10.526 13:55:12 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:10.526 13:55:12 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:10.526 ************************************
00:02:10.526 END TEST asan
00:02:10.526 ************************************
00:02:10.787 13:55:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:10.787 13:55:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:10.787 13:55:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:10.787 13:55:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:10.787 13:55:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.788 ************************************
00:02:10.788 START TEST ubsan
00:02:10.788 ************************************
00:02:10.788 using ubsan
00:02:10.788 13:55:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:10.788 
00:02:10.788 real 0m0.000s
00:02:10.788 user 0m0.000s
00:02:10.788 sys 0m0.000s
00:02:10.788 13:55:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:10.788 ************************************
00:02:10.788 END TEST ubsan
00:02:10.788 ************************************
00:02:10.788 13:55:12 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:10.788 13:55:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:10.788 13:55:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:10.788 13:55:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:10.788 13:55:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:10.788 13:55:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:10.788 13:55:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:10.788 13:55:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:10.788 13:55:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:10.788 13:55:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:10.788 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:10.788 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:11.360 Using 'verbs' RDMA provider
00:02:22.302 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:32.279 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:32.537 Creating mk/config.mk...done.
00:02:32.537 Creating mk/cc.flags.mk...done.
00:02:32.537 Type 'make' to build.
00:02:32.537 13:55:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:32.537 13:55:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:32.537 13:55:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:32.537 13:55:34 -- common/autotest_common.sh@10 -- $ set +x
00:02:32.537 ************************************
00:02:32.537 START TEST make
00:02:32.537 ************************************
00:02:32.537 13:55:34 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:32.818 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:32.818 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:32.818 meson setup builddir \
00:02:32.818 -Dwith-libaio=enabled \
00:02:32.818 -Dwith-liburing=enabled \
00:02:32.818 -Dwith-libvfn=disabled \
00:02:32.818 -Dwith-spdk=disabled \
00:02:32.818 -Dexamples=false \
00:02:32.818 -Dtests=false \
00:02:32.818 -Dtools=false && \
00:02:32.818 meson compile -C builddir && \
00:02:32.818 cd -)
00:02:32.818 make[1]: Nothing to be done for 'all'.
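[Annotation] The make target above shells out to meson for the bundled xnvme subproject, enabling the libaio and liburing backends and disabling libvfn and the spdk subproject. A sketch of rerunning the same configuration standalone; this restates the command block the build printed, nothing more:

# Standalone rerun of the xnvme meson configure/compile shown above.
cd /home/vagrant/spdk_repo/spdk/xnvme
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled -Dwith-spdk=disabled \
    -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir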
00:02:34.762 The Meson build system
00:02:34.762 Version: 1.5.0
00:02:34.762 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:34.762 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:34.762 Build type: native build
00:02:34.762 Project name: xnvme
00:02:34.762 Project version: 0.7.5
00:02:34.762 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:34.762 C linker for the host machine: cc ld.bfd 2.40-14
00:02:34.762 Host machine cpu family: x86_64
00:02:34.762 Host machine cpu: x86_64
00:02:34.762 Message: host_machine.system: linux
00:02:34.762 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:34.762 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:34.762 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:34.762 Run-time dependency threads found: YES
00:02:34.762 Has header "setupapi.h" : NO
00:02:34.762 Has header "linux/blkzoned.h" : YES
00:02:34.762 Has header "linux/blkzoned.h" : YES (cached)
00:02:34.762 Has header "libaio.h" : YES
00:02:34.762 Library aio found: YES
00:02:34.762 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:34.762 Run-time dependency liburing found: YES 2.2
00:02:34.762 Dependency libvfn skipped: feature with-libvfn disabled
00:02:34.762 Found CMake: /usr/bin/cmake (3.27.7)
00:02:34.762 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:34.762 Subproject spdk : skipped: feature with-spdk disabled
00:02:34.762 Run-time dependency appleframeworks found: NO (tried framework)
00:02:34.762 Run-time dependency appleframeworks found: NO (tried framework)
00:02:34.762 Library rt found: YES
00:02:34.762 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:34.762 Configuring xnvme_config.h using configuration
00:02:34.762 Configuring xnvme.spec using configuration
00:02:34.762 Run-time dependency bash-completion found: YES 2.11
00:02:34.762 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:34.762 Program cp found: YES (/usr/bin/cp)
00:02:34.762 Build targets in project: 3
00:02:34.762 
00:02:34.762 xnvme 0.7.5
00:02:34.762 
00:02:34.762 Subprojects
00:02:34.762 spdk : NO Feature 'with-spdk' disabled
00:02:34.762 
00:02:34.762 User defined options
00:02:34.762 examples : false
00:02:34.762 tests : false
00:02:34.762 tools : false
00:02:34.762 with-libaio : enabled
00:02:34.762 with-liburing: enabled
00:02:34.762 with-libvfn : disabled
00:02:34.762 with-spdk : disabled
00:02:34.762 
00:02:34.762 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:35.021 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:35.021 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:35.279 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:35.279 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:35.279 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:35.279 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:35.279 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:35.279 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:35.279 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:35.279 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:35.279 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:35.279 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:35.279 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:35.279 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:35.279 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:35.279 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:35.279 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:35.279 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:35.279 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:35.279 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:35.279 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:35.279 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:35.279 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:35.279 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:35.279 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:35.279 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:35.279 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:35.279 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:35.279 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:35.279 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:35.279 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:35.279 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:35.279 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:35.279 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:35.279 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:35.538 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:35.538 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:35.538 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:35.538 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:35.538 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:35.538 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:35.538 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:35.538 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:35.538 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:35.538 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:35.538 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:35.538 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:35.538 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:35.538 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:35.538 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:35.538 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:35.538 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:35.538 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:35.538 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:35.538 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:35.538 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:35.538 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:35.538 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:35.538 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:35.538 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:35.538 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:35.538 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:35.538 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:35.538 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:35.538 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:35.538 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:35.796 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:35.796 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:35.796 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:35.796 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:35.796 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:35.796 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:35.796 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:35.796 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:36.054 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:36.054 [75/76] Linking static target lib/libxnvme.a
00:02:36.054 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:36.054 INFO: autodetecting backend as ninja
00:02:36.054 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:36.312 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:42.872 The Meson build system
00:02:42.873 Version: 1.5.0
00:02:42.873 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:42.873 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:42.873 Build type: native build
00:02:42.873 Program cat found: YES (/usr/bin/cat)
00:02:42.873 Project name: DPDK
00:02:42.873 Project version: 24.03.0
00:02:42.873 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:42.873 C linker for the host machine: cc ld.bfd 2.40-14
00:02:42.873 Host machine cpu family: x86_64
00:02:42.873 Host machine cpu: x86_64
00:02:42.873 Message: ## Building in Developer Mode ##
00:02:42.873 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:42.873 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:42.873 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:42.873 Program python3 found: YES (/usr/bin/python3)
00:02:42.873 Program cat found: YES (/usr/bin/cat)
00:02:42.873 Compiler for C supports arguments -march=native: YES
00:02:42.873 Checking for size of "void *" : 8
00:02:42.873 Checking for size of "void *" : 8 (cached)
00:02:42.873 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:42.873 Library m found: YES
00:02:42.873 Library numa found: YES
00:02:42.873 Has header "numaif.h" : YES
00:02:42.873 Library fdt found: NO
00:02:42.873 Library execinfo found: NO
00:02:42.873 Has header "execinfo.h" : YES
00:02:42.873 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:42.873 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:42.873 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:42.873 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:42.873 Run-time dependency openssl found: YES 3.1.1
00:02:42.873 Run-time dependency libpcap found: YES 1.10.4
00:02:42.873 Has header "pcap.h" with dependency libpcap: YES
00:02:42.873 Compiler for C supports arguments -Wcast-qual: YES
00:02:42.873 Compiler for C supports arguments -Wdeprecated: YES
00:02:42.873 Compiler for C supports arguments -Wformat: YES
00:02:42.873 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:42.873 Compiler for C supports arguments -Wformat-security: NO
00:02:42.873 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:42.873 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:42.873 Compiler for C supports arguments -Wnested-externs: YES
00:02:42.873 Compiler for C supports arguments -Wold-style-definition: YES
00:02:42.873 Compiler for C supports arguments -Wpointer-arith: YES
00:02:42.873 Compiler for C supports arguments -Wsign-compare: YES
00:02:42.873 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:42.873 Compiler for C supports arguments -Wundef: YES
00:02:42.873 Compiler for C supports arguments -Wwrite-strings: YES
00:02:42.873 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:42.873 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:42.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:42.873 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:42.873 Program objdump found: YES (/usr/bin/objdump)
00:02:42.873 Compiler for C supports arguments -mavx512f: YES
00:02:42.873 Checking if "AVX512 checking" compiles: YES
00:02:42.873 Fetching value of define "__SSE4_2__" : 1
00:02:42.873 Fetching value of define "__AES__" : 1
00:02:42.873 Fetching value of define "__AVX__" : 1
00:02:42.873 Fetching value of define "__AVX2__" : 1
00:02:42.873 Fetching value of define "__AVX512BW__" : 1
00:02:42.873 Fetching value of define "__AVX512CD__" : 1
00:02:42.873 Fetching value of define "__AVX512DQ__" : 1
00:02:42.873 Fetching value of define "__AVX512F__" : 1
00:02:42.873 Fetching value of define "__AVX512VL__" : 1
00:02:42.873 Fetching value of define "__PCLMUL__" : 1
00:02:42.873 Fetching value of define "__RDRND__" : 1
00:02:42.873 Fetching value of define "__RDSEED__" : 1
00:02:42.873 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:42.873 Fetching value of define "__znver1__" : (undefined)
00:02:42.873 Fetching value of define "__znver2__" : (undefined)
00:02:42.873 Fetching value of define "__znver3__" : (undefined)
00:02:42.873 Fetching value of define "__znver4__" : (undefined)
00:02:42.873 Library asan found: YES
00:02:42.873 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:42.873 Message: lib/log: Defining dependency "log"
00:02:42.873 Message: lib/kvargs: Defining dependency "kvargs"
00:02:42.873 Message: lib/telemetry: Defining dependency "telemetry"
00:02:42.873 Library rt found: YES
00:02:42.873 Checking for function "getentropy" : NO
00:02:42.873 Message: lib/eal: Defining dependency "eal"
00:02:42.873 Message: lib/ring: Defining dependency "ring"
00:02:42.873 Message: lib/rcu: Defining dependency "rcu"
00:02:42.873 Message: lib/mempool: Defining dependency "mempool"
00:02:42.873 Message: lib/mbuf: Defining dependency "mbuf"
00:02:42.873 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:42.873 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:42.873 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:42.873 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:42.873 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:42.873 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:42.873 Compiler for C supports arguments -mpclmul: YES
00:02:42.873 Compiler for C supports arguments -maes: YES
00:02:42.873 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:42.873 Compiler for C supports arguments -mavx512bw: YES
00:02:42.873 Compiler for C supports arguments -mavx512dq: YES
00:02:42.873 Compiler for C supports arguments -mavx512vl: YES
00:02:42.873 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:42.873 Compiler for C supports arguments -mavx2: YES
00:02:42.873 Compiler for C supports arguments -mavx: YES
00:02:42.873 Message: lib/net: Defining dependency "net"
00:02:42.873 Message: lib/meter: Defining dependency "meter"
00:02:42.873 Message: lib/ethdev: Defining dependency "ethdev"
00:02:42.873 Message: lib/pci: Defining dependency "pci"
00:02:42.873 Message: lib/cmdline: Defining dependency "cmdline"
00:02:42.873 Message: lib/hash: Defining dependency "hash"
00:02:42.873 Message: lib/timer: Defining dependency "timer"
00:02:42.873 Message: lib/compressdev: Defining dependency "compressdev"
00:02:42.873 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:42.873 Message: lib/dmadev: Defining dependency "dmadev"
00:02:42.873 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:42.873 Message: lib/power: Defining dependency "power"
00:02:42.873 Message: lib/reorder: Defining dependency "reorder"
00:02:42.873 Message: lib/security: Defining dependency "security"
00:02:42.873 Has header "linux/userfaultfd.h" : YES
00:02:42.873 Has header "linux/vduse.h" : YES
00:02:42.873 Message: lib/vhost: Defining dependency "vhost"
00:02:42.873 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:42.873 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:42.873 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:42.873 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:42.873 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:42.873 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:42.873 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:42.873 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:42.873 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:42.873 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:42.873 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:42.873 Configuring doxy-api-html.conf using configuration
00:02:42.873 Configuring doxy-api-man.conf using configuration
00:02:42.873 Program mandb found: YES (/usr/bin/mandb)
00:02:42.873 Program sphinx-build found: NO
00:02:42.873 Configuring rte_build_config.h using configuration
00:02:42.873 Message: 
00:02:42.873 =================
00:02:42.873 Applications Enabled
00:02:42.873 =================
00:02:42.873 
00:02:42.873 apps:
00:02:42.873 
00:02:42.873 
00:02:42.873 Message: 
00:02:42.873 =================
00:02:42.873 Libraries Enabled
00:02:42.873 =================
00:02:42.873 
00:02:42.873 libs:
00:02:42.873 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:42.873 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:42.873 cryptodev, dmadev, power, reorder, security, vhost,
00:02:42.873 
00:02:42.873 Message: 
00:02:42.873 ===============
00:02:42.873 Drivers Enabled
00:02:42.873 ===============
00:02:42.873 
00:02:42.873 common:
00:02:42.873 
00:02:42.873 bus:
00:02:42.873 pci, vdev,
00:02:42.873 mempool:
00:02:42.873 ring,
00:02:42.873 dma:
00:02:42.873 
00:02:42.873 net:
00:02:42.873 
00:02:42.873 crypto:
00:02:42.873 
00:02:42.873 compress:
00:02:42.873 
00:02:42.873 vdpa:
00:02:42.873 
00:02:42.873 
00:02:42.873 Message: 
00:02:42.873 =================
00:02:42.873 Content Skipped
00:02:42.873 =================
00:02:42.873 
00:02:42.873 apps:
00:02:42.873 dumpcap: explicitly disabled via build config
00:02:42.873 graph: explicitly disabled via build config
00:02:42.873 pdump: explicitly disabled via build config
00:02:42.873 proc-info: explicitly disabled via build config
00:02:42.873 test-acl: explicitly disabled via build config
00:02:42.873 test-bbdev: explicitly disabled via build config
00:02:42.873 test-cmdline: explicitly disabled via build config
00:02:42.873 test-compress-perf: explicitly disabled via build config
00:02:42.873 test-crypto-perf: explicitly disabled via build config
00:02:42.873 test-dma-perf: explicitly disabled via build config
00:02:42.873 test-eventdev: explicitly disabled via build config
00:02:42.873 test-fib: explicitly disabled via build config
00:02:42.873 test-flow-perf: explicitly disabled via build config
00:02:42.873 test-gpudev: explicitly disabled via build config
00:02:42.873 test-mldev: explicitly disabled via build config
00:02:42.873 test-pipeline: explicitly disabled via build config
00:02:42.873 test-pmd: explicitly disabled via build config
00:02:42.873 test-regex: explicitly disabled via build config
00:02:42.873 test-sad: explicitly disabled via build config
00:02:42.873 test-security-perf: explicitly disabled via build config
00:02:42.873 
00:02:42.873 libs:
00:02:42.873 argparse: explicitly disabled via build config
00:02:42.873 metrics: explicitly disabled via build config
00:02:42.873 acl: explicitly disabled via build config
00:02:42.873 bbdev: explicitly disabled via build config
00:02:42.874 bitratestats: explicitly disabled via build config
00:02:42.874 bpf: explicitly disabled via build config
00:02:42.874 cfgfile: explicitly disabled via build config
00:02:42.874 distributor: explicitly disabled via build config
00:02:42.874 efd: explicitly disabled via build config
00:02:42.874 eventdev: explicitly disabled via build config
00:02:42.874 dispatcher: explicitly disabled via build config
00:02:42.874 gpudev: explicitly disabled via build config
00:02:42.874 gro: explicitly disabled via build config
00:02:42.874 gso: explicitly disabled via build config
00:02:42.874 ip_frag: explicitly disabled via build config
00:02:42.874 jobstats: explicitly disabled via build config
00:02:42.874 latencystats: explicitly disabled via build config
00:02:42.874 lpm: explicitly disabled via build config
00:02:42.874 member: explicitly disabled via build config
00:02:42.874 pcapng: explicitly disabled via build config
00:02:42.874 rawdev: explicitly disabled via build config
00:02:42.874 regexdev: explicitly disabled via build config
regexdev: explicitly disabled via build config 00:02:42.874 mldev: explicitly disabled via build config 00:02:42.874 rib: explicitly disabled via build config 00:02:42.874 sched: explicitly disabled via build config 00:02:42.874 stack: explicitly disabled via build config 00:02:42.874 ipsec: explicitly disabled via build config 00:02:42.874 pdcp: explicitly disabled via build config 00:02:42.874 fib: explicitly disabled via build config 00:02:42.874 port: explicitly disabled via build config 00:02:42.874 pdump: explicitly disabled via build config 00:02:42.874 table: explicitly disabled via build config 00:02:42.874 pipeline: explicitly disabled via build config 00:02:42.874 graph: explicitly disabled via build config 00:02:42.874 node: explicitly disabled via build config 00:02:42.874 00:02:42.874 drivers: 00:02:42.874 common/cpt: not in enabled drivers build config 00:02:42.874 common/dpaax: not in enabled drivers build config 00:02:42.874 common/iavf: not in enabled drivers build config 00:02:42.874 common/idpf: not in enabled drivers build config 00:02:42.874 common/ionic: not in enabled drivers build config 00:02:42.874 common/mvep: not in enabled drivers build config 00:02:42.874 common/octeontx: not in enabled drivers build config 00:02:42.874 bus/auxiliary: not in enabled drivers build config 00:02:42.874 bus/cdx: not in enabled drivers build config 00:02:42.874 bus/dpaa: not in enabled drivers build config 00:02:42.874 bus/fslmc: not in enabled drivers build config 00:02:42.874 bus/ifpga: not in enabled drivers build config 00:02:42.874 bus/platform: not in enabled drivers build config 00:02:42.874 bus/uacce: not in enabled drivers build config 00:02:42.874 bus/vmbus: not in enabled drivers build config 00:02:42.874 common/cnxk: not in enabled drivers build config 00:02:42.874 common/mlx5: not in enabled drivers build config 00:02:42.874 common/nfp: not in enabled drivers build config 00:02:42.874 common/nitrox: not in enabled drivers build config 00:02:42.874 common/qat: not in enabled drivers build config 00:02:42.874 common/sfc_efx: not in enabled drivers build config 00:02:42.874 mempool/bucket: not in enabled drivers build config 00:02:42.874 mempool/cnxk: not in enabled drivers build config 00:02:42.874 mempool/dpaa: not in enabled drivers build config 00:02:42.874 mempool/dpaa2: not in enabled drivers build config 00:02:42.874 mempool/octeontx: not in enabled drivers build config 00:02:42.874 mempool/stack: not in enabled drivers build config 00:02:42.874 dma/cnxk: not in enabled drivers build config 00:02:42.874 dma/dpaa: not in enabled drivers build config 00:02:42.874 dma/dpaa2: not in enabled drivers build config 00:02:42.874 dma/hisilicon: not in enabled drivers build config 00:02:42.874 dma/idxd: not in enabled drivers build config 00:02:42.874 dma/ioat: not in enabled drivers build config 00:02:42.874 dma/skeleton: not in enabled drivers build config 00:02:42.874 net/af_packet: not in enabled drivers build config 00:02:42.874 net/af_xdp: not in enabled drivers build config 00:02:42.874 net/ark: not in enabled drivers build config 00:02:42.874 net/atlantic: not in enabled drivers build config 00:02:42.874 net/avp: not in enabled drivers build config 00:02:42.874 net/axgbe: not in enabled drivers build config 00:02:42.874 net/bnx2x: not in enabled drivers build config 00:02:42.874 net/bnxt: not in enabled drivers build config 00:02:42.874 net/bonding: not in enabled drivers build config 00:02:42.874 net/cnxk: not in enabled drivers build config 00:02:42.874 net/cpfl: 
not in enabled drivers build config 00:02:42.874 net/cxgbe: not in enabled drivers build config 00:02:42.874 net/dpaa: not in enabled drivers build config 00:02:42.874 net/dpaa2: not in enabled drivers build config 00:02:42.874 net/e1000: not in enabled drivers build config 00:02:42.874 net/ena: not in enabled drivers build config 00:02:42.874 net/enetc: not in enabled drivers build config 00:02:42.874 net/enetfec: not in enabled drivers build config 00:02:42.874 net/enic: not in enabled drivers build config 00:02:42.874 net/failsafe: not in enabled drivers build config 00:02:42.874 net/fm10k: not in enabled drivers build config 00:02:42.874 net/gve: not in enabled drivers build config 00:02:42.874 net/hinic: not in enabled drivers build config 00:02:42.874 net/hns3: not in enabled drivers build config 00:02:42.874 net/i40e: not in enabled drivers build config 00:02:42.874 net/iavf: not in enabled drivers build config 00:02:42.874 net/ice: not in enabled drivers build config 00:02:42.874 net/idpf: not in enabled drivers build config 00:02:42.874 net/igc: not in enabled drivers build config 00:02:42.874 net/ionic: not in enabled drivers build config 00:02:42.874 net/ipn3ke: not in enabled drivers build config 00:02:42.874 net/ixgbe: not in enabled drivers build config 00:02:42.874 net/mana: not in enabled drivers build config 00:02:42.874 net/memif: not in enabled drivers build config 00:02:42.874 net/mlx4: not in enabled drivers build config 00:02:42.874 net/mlx5: not in enabled drivers build config 00:02:42.874 net/mvneta: not in enabled drivers build config 00:02:42.874 net/mvpp2: not in enabled drivers build config 00:02:42.874 net/netvsc: not in enabled drivers build config 00:02:42.874 net/nfb: not in enabled drivers build config 00:02:42.874 net/nfp: not in enabled drivers build config 00:02:42.874 net/ngbe: not in enabled drivers build config 00:02:42.874 net/null: not in enabled drivers build config 00:02:42.874 net/octeontx: not in enabled drivers build config 00:02:42.874 net/octeon_ep: not in enabled drivers build config 00:02:42.874 net/pcap: not in enabled drivers build config 00:02:42.874 net/pfe: not in enabled drivers build config 00:02:42.874 net/qede: not in enabled drivers build config 00:02:42.874 net/ring: not in enabled drivers build config 00:02:42.874 net/sfc: not in enabled drivers build config 00:02:42.874 net/softnic: not in enabled drivers build config 00:02:42.874 net/tap: not in enabled drivers build config 00:02:42.874 net/thunderx: not in enabled drivers build config 00:02:42.874 net/txgbe: not in enabled drivers build config 00:02:42.874 net/vdev_netvsc: not in enabled drivers build config 00:02:42.874 net/vhost: not in enabled drivers build config 00:02:42.874 net/virtio: not in enabled drivers build config 00:02:42.874 net/vmxnet3: not in enabled drivers build config 00:02:42.874 raw/*: missing internal dependency, "rawdev" 00:02:42.874 crypto/armv8: not in enabled drivers build config 00:02:42.874 crypto/bcmfs: not in enabled drivers build config 00:02:42.874 crypto/caam_jr: not in enabled drivers build config 00:02:42.874 crypto/ccp: not in enabled drivers build config 00:02:42.874 crypto/cnxk: not in enabled drivers build config 00:02:42.874 crypto/dpaa_sec: not in enabled drivers build config 00:02:42.874 crypto/dpaa2_sec: not in enabled drivers build config 00:02:42.874 crypto/ipsec_mb: not in enabled drivers build config 00:02:42.874 crypto/mlx5: not in enabled drivers build config 00:02:42.874 crypto/mvsam: not in enabled drivers build config 
00:02:42.874 crypto/nitrox: not in enabled drivers build config 00:02:42.874 crypto/null: not in enabled drivers build config 00:02:42.874 crypto/octeontx: not in enabled drivers build config 00:02:42.874 crypto/openssl: not in enabled drivers build config 00:02:42.874 crypto/scheduler: not in enabled drivers build config 00:02:42.874 crypto/uadk: not in enabled drivers build config 00:02:42.874 crypto/virtio: not in enabled drivers build config 00:02:42.874 compress/isal: not in enabled drivers build config 00:02:42.874 compress/mlx5: not in enabled drivers build config 00:02:42.874 compress/nitrox: not in enabled drivers build config 00:02:42.874 compress/octeontx: not in enabled drivers build config 00:02:42.874 compress/zlib: not in enabled drivers build config 00:02:42.874 regex/*: missing internal dependency, "regexdev" 00:02:42.874 ml/*: missing internal dependency, "mldev" 00:02:42.874 vdpa/ifc: not in enabled drivers build config 00:02:42.874 vdpa/mlx5: not in enabled drivers build config 00:02:42.874 vdpa/nfp: not in enabled drivers build config 00:02:42.874 vdpa/sfc: not in enabled drivers build config 00:02:42.874 event/*: missing internal dependency, "eventdev" 00:02:42.874 baseband/*: missing internal dependency, "bbdev" 00:02:42.874 gpu/*: missing internal dependency, "gpudev" 00:02:42.874 00:02:42.874 00:02:42.874 Build targets in project: 84 00:02:42.874 00:02:42.874 DPDK 24.03.0 00:02:42.874 00:02:42.874 User defined options 00:02:42.874 buildtype : debug 00:02:42.874 default_library : shared 00:02:42.874 libdir : lib 00:02:42.874 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.874 b_sanitize : address 00:02:42.874 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:42.874 c_link_args : 00:02:42.874 cpu_instruction_set: native 00:02:42.874 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:42.874 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:42.874 enable_docs : false 00:02:42.874 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:42.874 enable_kmods : false 00:02:42.874 max_lcores : 128 00:02:42.874 tests : false 00:02:42.874 00:02:42.874 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.874 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:43.133 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:43.133 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.133 [3/267] Linking static target lib/librte_kvargs.a 00:02:43.133 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:43.133 [5/267] Linking static target lib/librte_log.a 00:02:43.133 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.133 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.391 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.391 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 
00:02:43.391 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.391 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.391 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:43.391 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.391 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.391 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:43.391 [16/267] Linking static target lib/librte_telemetry.a 00:02:43.391 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.391 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:43.649 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.907 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.907 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.907 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.907 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.907 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.907 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.907 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.907 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.907 [28/267] Linking target lib/librte_log.so.24.1 00:02:43.907 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.165 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.165 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:44.165 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:44.165 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.165 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.165 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.165 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.165 [37/267] Linking target lib/librte_telemetry.so.24.1 00:02:44.165 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.166 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.424 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.424 [41/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:44.424 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.424 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.424 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.424 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:44.424 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.424 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.682 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.682 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.682 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.682 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.682 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.682 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.682 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.012 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:45.012 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.012 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.012 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.012 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:45.012 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:45.012 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.012 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.012 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.279 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.279 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.279 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:45.279 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.279 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.537 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.537 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.537 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:45.537 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:45.537 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.537 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.537 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:45.537 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:45.537 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.537 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.796 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.796 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.796 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.796 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.796 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:46.054 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:46.054 [85/267] Linking static target lib/librte_eal.a 00:02:46.054 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:46.054 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.054 [88/267] Linking static target lib/librte_ring.a 00:02:46.054 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:46.054 [90/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:46.054 [91/267] Linking static target lib/librte_rcu.a 00:02:46.054 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.054 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:46.054 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.054 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:46.312 [96/267] Linking static target lib/librte_mempool.a 00:02:46.312 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.312 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.312 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.312 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.312 [101/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.570 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.570 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.570 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.570 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.570 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:46.570 [107/267] Linking static target lib/librte_net.a 00:02:46.570 [108/267] Linking static target lib/librte_meter.a 00:02:46.828 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:46.828 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.828 [111/267] Linking static target lib/librte_mbuf.a 00:02:46.828 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:47.086 [113/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.086 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:47.086 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.086 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.086 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.086 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.344 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.344 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.603 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:47.603 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.603 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.603 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:47.603 [125/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.603 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.603 [127/267] Linking static target lib/librte_pci.a 00:02:47.603 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:47.861 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:47.861 [130/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.861 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:47.861 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:47.861 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:47.861 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:47.861 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:48.119 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:48.119 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:48.119 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.119 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:48.119 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:48.119 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.119 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:48.119 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:48.119 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.119 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:48.119 [146/267] Linking static target lib/librte_cmdline.a 00:02:48.378 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:48.378 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.378 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.378 [150/267] Linking static target lib/librte_timer.a 00:02:48.378 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:48.378 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.636 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.636 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.636 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:48.636 [156/267] Linking static target lib/librte_ethdev.a 00:02:48.636 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.894 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.894 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:48.894 [160/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.894 [161/267] Linking static target lib/librte_compressdev.a 00:02:48.894 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.894 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.894 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.894 [165/267] Linking static target lib/librte_dmadev.a 00:02:48.894 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.153 [167/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:49.153 [168/267] Linking static target lib/librte_hash.a 00:02:49.153 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.153 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.411 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:49.411 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.411 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.411 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.669 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.669 [176/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.669 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.669 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.669 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.669 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.669 [181/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.928 [182/267] Linking static target lib/librte_cryptodev.a 00:02:49.928 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.928 [184/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.928 [185/267] Linking static target lib/librte_power.a 00:02:49.928 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.928 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.928 [188/267] Linking static target lib/librte_reorder.a 00:02:49.928 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.186 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.186 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.186 [192/267] Linking static target lib/librte_security.a 00:02:50.445 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.445 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.703 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.703 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.703 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.703 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:50.961 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:50.961 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:50.961 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.219 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.219 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.219 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.219 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.219 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.219 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.477 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.477 [209/267] Linking static 
target drivers/libtmp_rte_bus_vdev.a 00:02:51.477 [210/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.477 [211/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.477 [212/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.477 [213/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.477 [214/267] Linking static target drivers/librte_bus_pci.a 00:02:51.477 [215/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.477 [216/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.477 [217/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.477 [218/267] Linking static target drivers/librte_bus_vdev.a 00:02:51.736 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.736 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.736 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.736 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.736 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.736 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:51.736 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.994 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.252 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.187 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.187 [229/267] Linking target lib/librte_eal.so.24.1 00:02:53.187 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.451 [231/267] Linking target lib/librte_pci.so.24.1 00:02:53.451 [232/267] Linking target lib/librte_ring.so.24.1 00:02:53.451 [233/267] Linking target lib/librte_meter.so.24.1 00:02:53.451 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:53.451 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.451 [236/267] Linking target lib/librte_timer.so.24.1 00:02:53.451 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.451 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.451 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.451 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.451 [241/267] Linking target lib/librte_rcu.so.24.1 00:02:53.451 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.451 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:53.451 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.451 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.710 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.710 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.710 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:53.710 [249/267] 
Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.710 [250/267] Linking target lib/librte_net.so.24.1 00:02:53.710 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:53.710 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:53.710 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:53.968 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.968 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.968 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:53.968 [257/267] Linking target lib/librte_hash.so.24.1 00:02:53.968 [258/267] Linking target lib/librte_security.so.24.1 00:02:53.968 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:53.968 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.227 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:54.227 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:54.227 [263/267] Linking target lib/librte_power.so.24.1 00:02:54.794 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.794 [265/267] Linking static target lib/librte_vhost.a 00:02:56.168 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.168 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:56.168 INFO: autodetecting backend as ninja 00:02:56.168 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.040 CC lib/ut/ut.o 00:03:11.040 CC lib/ut_mock/mock.o 00:03:11.040 CC lib/log/log.o 00:03:11.040 CC lib/log/log_flags.o 00:03:11.040 CC lib/log/log_deprecated.o 00:03:11.040 LIB libspdk_ut_mock.a 00:03:11.040 LIB libspdk_ut.a 00:03:11.040 SO libspdk_ut_mock.so.6.0 00:03:11.040 SO libspdk_ut.so.2.0 00:03:11.040 LIB libspdk_log.a 00:03:11.040 SYMLINK libspdk_ut_mock.so 00:03:11.040 SO libspdk_log.so.7.1 00:03:11.040 SYMLINK libspdk_ut.so 00:03:11.040 SYMLINK libspdk_log.so 00:03:11.040 CC lib/dma/dma.o 00:03:11.040 CC lib/util/base64.o 00:03:11.040 CC lib/util/bit_array.o 00:03:11.040 CC lib/util/cpuset.o 00:03:11.040 CC lib/util/crc16.o 00:03:11.040 CC lib/util/crc32.o 00:03:11.040 CC lib/util/crc32c.o 00:03:11.040 CC lib/ioat/ioat.o 00:03:11.040 CXX lib/trace_parser/trace.o 00:03:11.040 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.040 CC lib/util/crc32_ieee.o 00:03:11.040 CC lib/util/crc64.o 00:03:11.040 CC lib/util/dif.o 00:03:11.040 CC lib/util/fd.o 00:03:11.040 LIB libspdk_dma.a 00:03:11.040 CC lib/util/fd_group.o 00:03:11.040 CC lib/util/file.o 00:03:11.040 SO libspdk_dma.so.5.0 00:03:11.040 CC lib/util/hexlify.o 00:03:11.040 CC lib/vfio_user/host/vfio_user.o 00:03:11.040 SYMLINK libspdk_dma.so 00:03:11.040 CC lib/util/iov.o 00:03:11.040 CC lib/util/math.o 00:03:11.041 LIB libspdk_ioat.a 00:03:11.041 SO libspdk_ioat.so.7.0 00:03:11.041 CC lib/util/net.o 00:03:11.041 CC lib/util/pipe.o 00:03:11.041 SYMLINK libspdk_ioat.so 00:03:11.041 CC lib/util/strerror_tls.o 00:03:11.041 CC lib/util/string.o 00:03:11.041 CC lib/util/uuid.o 00:03:11.041 LIB libspdk_vfio_user.a 00:03:11.041 CC lib/util/xor.o 00:03:11.041 CC lib/util/zipf.o 00:03:11.041 SO libspdk_vfio_user.so.5.0 00:03:11.041 CC lib/util/md5.o 00:03:11.041 SYMLINK libspdk_vfio_user.so 00:03:11.041 LIB libspdk_util.a 00:03:11.041 SO libspdk_util.so.10.1 00:03:11.041 SYMLINK 
libspdk_util.so 00:03:11.041 LIB libspdk_trace_parser.a 00:03:11.041 SO libspdk_trace_parser.so.6.0 00:03:11.041 CC lib/json/json_parse.o 00:03:11.041 CC lib/idxd/idxd.o 00:03:11.041 CC lib/idxd/idxd_user.o 00:03:11.041 CC lib/json/json_util.o 00:03:11.041 CC lib/idxd/idxd_kernel.o 00:03:11.041 CC lib/rdma_utils/rdma_utils.o 00:03:11.041 CC lib/conf/conf.o 00:03:11.041 CC lib/vmd/vmd.o 00:03:11.041 SYMLINK libspdk_trace_parser.so 00:03:11.041 CC lib/env_dpdk/env.o 00:03:11.041 CC lib/vmd/led.o 00:03:11.041 CC lib/env_dpdk/memory.o 00:03:11.041 CC lib/env_dpdk/pci.o 00:03:11.041 LIB libspdk_conf.a 00:03:11.041 CC lib/json/json_write.o 00:03:11.041 CC lib/env_dpdk/init.o 00:03:11.041 CC lib/env_dpdk/threads.o 00:03:11.041 SO libspdk_conf.so.6.0 00:03:11.041 LIB libspdk_rdma_utils.a 00:03:11.041 SO libspdk_rdma_utils.so.1.0 00:03:11.041 SYMLINK libspdk_conf.so 00:03:11.041 CC lib/env_dpdk/pci_ioat.o 00:03:11.041 SYMLINK libspdk_rdma_utils.so 00:03:11.041 CC lib/env_dpdk/pci_virtio.o 00:03:11.041 CC lib/env_dpdk/pci_vmd.o 00:03:11.041 CC lib/rdma_provider/common.o 00:03:11.041 CC lib/env_dpdk/pci_idxd.o 00:03:11.041 LIB libspdk_json.a 00:03:11.041 CC lib/env_dpdk/pci_event.o 00:03:11.041 SO libspdk_json.so.6.0 00:03:11.041 CC lib/env_dpdk/sigbus_handler.o 00:03:11.041 CC lib/env_dpdk/pci_dpdk.o 00:03:11.041 SYMLINK libspdk_json.so 00:03:11.041 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:11.041 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:11.041 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:11.041 LIB libspdk_idxd.a 00:03:11.041 SO libspdk_idxd.so.12.1 00:03:11.041 LIB libspdk_vmd.a 00:03:11.041 SO libspdk_vmd.so.6.0 00:03:11.041 SYMLINK libspdk_idxd.so 00:03:11.041 SYMLINK libspdk_vmd.so 00:03:11.041 CC lib/jsonrpc/jsonrpc_server.o 00:03:11.041 CC lib/jsonrpc/jsonrpc_client.o 00:03:11.041 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:11.041 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:11.041 LIB libspdk_rdma_provider.a 00:03:11.041 SO libspdk_rdma_provider.so.7.0 00:03:11.041 SYMLINK libspdk_rdma_provider.so 00:03:11.298 LIB libspdk_jsonrpc.a 00:03:11.298 SO libspdk_jsonrpc.so.6.0 00:03:11.555 SYMLINK libspdk_jsonrpc.so 00:03:11.555 CC lib/rpc/rpc.o 00:03:11.813 LIB libspdk_env_dpdk.a 00:03:11.813 SO libspdk_env_dpdk.so.15.1 00:03:11.813 LIB libspdk_rpc.a 00:03:11.813 SO libspdk_rpc.so.6.0 00:03:11.813 SYMLINK libspdk_rpc.so 00:03:12.071 SYMLINK libspdk_env_dpdk.so 00:03:12.071 CC lib/notify/notify.o 00:03:12.071 CC lib/notify/notify_rpc.o 00:03:12.071 CC lib/trace/trace_flags.o 00:03:12.071 CC lib/trace/trace.o 00:03:12.071 CC lib/keyring/keyring.o 00:03:12.071 CC lib/keyring/keyring_rpc.o 00:03:12.071 CC lib/trace/trace_rpc.o 00:03:12.328 LIB libspdk_notify.a 00:03:12.329 SO libspdk_notify.so.6.0 00:03:12.329 LIB libspdk_keyring.a 00:03:12.329 LIB libspdk_trace.a 00:03:12.329 SYMLINK libspdk_notify.so 00:03:12.329 SO libspdk_keyring.so.2.0 00:03:12.329 SO libspdk_trace.so.11.0 00:03:12.329 SYMLINK libspdk_keyring.so 00:03:12.329 SYMLINK libspdk_trace.so 00:03:12.586 CC lib/sock/sock.o 00:03:12.586 CC lib/sock/sock_rpc.o 00:03:12.586 CC lib/thread/thread.o 00:03:12.586 CC lib/thread/iobuf.o 00:03:13.151 LIB libspdk_sock.a 00:03:13.151 SO libspdk_sock.so.10.0 00:03:13.151 SYMLINK libspdk_sock.so 00:03:13.408 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.408 CC lib/nvme/nvme_fabric.o 00:03:13.408 CC lib/nvme/nvme_ctrlr.o 00:03:13.408 CC lib/nvme/nvme_ns_cmd.o 00:03:13.408 CC lib/nvme/nvme_ns.o 00:03:13.408 CC lib/nvme/nvme_qpair.o 00:03:13.408 CC lib/nvme/nvme_pcie.o 00:03:13.408 CC lib/nvme/nvme_pcie_common.o 
00:03:13.408 CC lib/nvme/nvme.o 00:03:13.665 LIB libspdk_thread.a 00:03:13.923 SO libspdk_thread.so.11.0 00:03:13.923 SYMLINK libspdk_thread.so 00:03:13.923 CC lib/nvme/nvme_quirks.o 00:03:13.923 CC lib/nvme/nvme_transport.o 00:03:13.923 CC lib/nvme/nvme_discovery.o 00:03:13.923 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:14.181 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.181 CC lib/nvme/nvme_tcp.o 00:03:14.181 CC lib/accel/accel.o 00:03:14.181 CC lib/nvme/nvme_opal.o 00:03:14.439 CC lib/blob/blobstore.o 00:03:14.439 CC lib/init/json_config.o 00:03:14.697 CC lib/virtio/virtio.o 00:03:14.697 CC lib/fsdev/fsdev.o 00:03:14.697 CC lib/fsdev/fsdev_io.o 00:03:14.697 CC lib/virtio/virtio_vhost_user.o 00:03:14.697 CC lib/init/subsystem.o 00:03:14.697 CC lib/virtio/virtio_vfio_user.o 00:03:14.954 CC lib/init/subsystem_rpc.o 00:03:14.954 CC lib/accel/accel_rpc.o 00:03:14.954 CC lib/accel/accel_sw.o 00:03:14.954 CC lib/init/rpc.o 00:03:14.954 CC lib/fsdev/fsdev_rpc.o 00:03:14.954 CC lib/virtio/virtio_pci.o 00:03:14.954 CC lib/blob/request.o 00:03:14.954 CC lib/blob/zeroes.o 00:03:15.212 LIB libspdk_init.a 00:03:15.212 SO libspdk_init.so.6.0 00:03:15.212 SYMLINK libspdk_init.so 00:03:15.212 LIB libspdk_fsdev.a 00:03:15.212 CC lib/blob/blob_bs_dev.o 00:03:15.212 CC lib/nvme/nvme_io_msg.o 00:03:15.212 SO libspdk_fsdev.so.2.0 00:03:15.212 LIB libspdk_accel.a 00:03:15.212 LIB libspdk_virtio.a 00:03:15.212 SO libspdk_accel.so.16.0 00:03:15.212 SYMLINK libspdk_fsdev.so 00:03:15.212 SO libspdk_virtio.so.7.0 00:03:15.470 SYMLINK libspdk_accel.so 00:03:15.470 CC lib/event/app.o 00:03:15.470 CC lib/event/reactor.o 00:03:15.470 SYMLINK libspdk_virtio.so 00:03:15.470 CC lib/event/log_rpc.o 00:03:15.470 CC lib/nvme/nvme_poll_group.o 00:03:15.470 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:15.470 CC lib/nvme/nvme_zns.o 00:03:15.470 CC lib/bdev/bdev.o 00:03:15.470 CC lib/bdev/bdev_rpc.o 00:03:15.470 CC lib/nvme/nvme_stubs.o 00:03:15.728 CC lib/nvme/nvme_auth.o 00:03:15.728 CC lib/bdev/bdev_zone.o 00:03:15.728 CC lib/bdev/part.o 00:03:15.728 CC lib/event/app_rpc.o 00:03:15.985 CC lib/nvme/nvme_cuse.o 00:03:15.985 CC lib/bdev/scsi_nvme.o 00:03:15.985 CC lib/nvme/nvme_rdma.o 00:03:15.985 CC lib/event/scheduler_static.o 00:03:15.985 LIB libspdk_fuse_dispatcher.a 00:03:15.985 SO libspdk_fuse_dispatcher.so.1.0 00:03:15.985 SYMLINK libspdk_fuse_dispatcher.so 00:03:16.242 LIB libspdk_event.a 00:03:16.242 SO libspdk_event.so.14.0 00:03:16.242 SYMLINK libspdk_event.so 00:03:17.175 LIB libspdk_blob.a 00:03:17.175 SO libspdk_blob.so.12.0 00:03:17.175 SYMLINK libspdk_blob.so 00:03:17.175 LIB libspdk_nvme.a 00:03:17.433 CC lib/lvol/lvol.o 00:03:17.433 CC lib/blobfs/tree.o 00:03:17.433 CC lib/blobfs/blobfs.o 00:03:17.433 SO libspdk_nvme.so.15.0 00:03:17.690 SYMLINK libspdk_nvme.so 00:03:17.948 LIB libspdk_blobfs.a 00:03:17.948 SO libspdk_blobfs.so.11.0 00:03:17.948 LIB libspdk_lvol.a 00:03:17.948 SO libspdk_lvol.so.11.0 00:03:18.207 SYMLINK libspdk_blobfs.so 00:03:18.207 SYMLINK libspdk_lvol.so 00:03:18.207 LIB libspdk_bdev.a 00:03:18.465 SO libspdk_bdev.so.17.0 00:03:18.465 SYMLINK libspdk_bdev.so 00:03:18.723 CC lib/ublk/ublk.o 00:03:18.723 CC lib/ublk/ublk_rpc.o 00:03:18.723 CC lib/nbd/nbd.o 00:03:18.723 CC lib/ftl/ftl_core.o 00:03:18.723 CC lib/nbd/nbd_rpc.o 00:03:18.723 CC lib/nvmf/ctrlr_discovery.o 00:03:18.723 CC lib/nvmf/ctrlr.o 00:03:18.723 CC lib/nvmf/ctrlr_bdev.o 00:03:18.723 CC lib/ftl/ftl_init.o 00:03:18.723 CC lib/scsi/dev.o 00:03:18.723 CC lib/scsi/lun.o 00:03:18.723 CC lib/scsi/port.o 00:03:18.723 CC 
lib/ftl/ftl_layout.o 00:03:18.723 CC lib/ftl/ftl_debug.o 00:03:19.000 CC lib/ftl/ftl_io.o 00:03:19.000 CC lib/ftl/ftl_sb.o 00:03:19.000 CC lib/ftl/ftl_l2p.o 00:03:19.000 LIB libspdk_nbd.a 00:03:19.000 CC lib/ftl/ftl_l2p_flat.o 00:03:19.000 CC lib/scsi/scsi.o 00:03:19.000 SO libspdk_nbd.so.7.0 00:03:19.000 CC lib/ftl/ftl_nv_cache.o 00:03:19.000 SYMLINK libspdk_nbd.so 00:03:19.000 CC lib/ftl/ftl_band.o 00:03:19.000 CC lib/scsi/scsi_bdev.o 00:03:19.258 CC lib/ftl/ftl_band_ops.o 00:03:19.258 CC lib/nvmf/subsystem.o 00:03:19.258 CC lib/ftl/ftl_writer.o 00:03:19.258 CC lib/ftl/ftl_rq.o 00:03:19.258 LIB libspdk_ublk.a 00:03:19.258 CC lib/nvmf/nvmf.o 00:03:19.258 SO libspdk_ublk.so.3.0 00:03:19.258 SYMLINK libspdk_ublk.so 00:03:19.258 CC lib/scsi/scsi_pr.o 00:03:19.258 CC lib/scsi/scsi_rpc.o 00:03:19.258 CC lib/scsi/task.o 00:03:19.516 CC lib/ftl/ftl_reloc.o 00:03:19.516 CC lib/nvmf/nvmf_rpc.o 00:03:19.516 CC lib/nvmf/transport.o 00:03:19.516 CC lib/nvmf/tcp.o 00:03:19.516 CC lib/nvmf/stubs.o 00:03:19.774 LIB libspdk_scsi.a 00:03:19.774 SO libspdk_scsi.so.9.0 00:03:19.774 CC lib/nvmf/mdns_server.o 00:03:19.774 SYMLINK libspdk_scsi.so 00:03:19.774 CC lib/nvmf/rdma.o 00:03:20.032 CC lib/nvmf/auth.o 00:03:20.032 CC lib/ftl/ftl_l2p_cache.o 00:03:20.032 CC lib/ftl/ftl_p2l.o 00:03:20.032 CC lib/ftl/ftl_p2l_log.o 00:03:20.032 CC lib/ftl/mngt/ftl_mngt.o 00:03:20.290 CC lib/iscsi/conn.o 00:03:20.290 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:20.290 CC lib/iscsi/init_grp.o 00:03:20.290 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:20.290 CC lib/vhost/vhost.o 00:03:20.290 CC lib/vhost/vhost_rpc.o 00:03:20.548 CC lib/iscsi/iscsi.o 00:03:20.548 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:20.548 CC lib/vhost/vhost_scsi.o 00:03:20.548 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:20.548 CC lib/vhost/vhost_blk.o 00:03:20.806 CC lib/vhost/rte_vhost_user.o 00:03:20.806 CC lib/iscsi/param.o 00:03:20.806 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:20.806 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:21.064 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:21.064 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:21.064 CC lib/iscsi/portal_grp.o 00:03:21.064 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:21.064 CC lib/iscsi/tgt_node.o 00:03:21.322 CC lib/iscsi/iscsi_subsystem.o 00:03:21.322 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:21.322 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:21.322 CC lib/iscsi/iscsi_rpc.o 00:03:21.322 CC lib/iscsi/task.o 00:03:21.322 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:21.580 CC lib/ftl/utils/ftl_conf.o 00:03:21.580 CC lib/ftl/utils/ftl_md.o 00:03:21.580 LIB libspdk_vhost.a 00:03:21.580 CC lib/ftl/utils/ftl_mempool.o 00:03:21.580 CC lib/ftl/utils/ftl_bitmap.o 00:03:21.580 CC lib/ftl/utils/ftl_property.o 00:03:21.580 LIB libspdk_nvmf.a 00:03:21.580 SO libspdk_vhost.so.8.0 00:03:21.580 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:21.580 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:21.580 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:21.580 LIB libspdk_iscsi.a 00:03:21.580 SO libspdk_nvmf.so.20.0 00:03:21.580 SYMLINK libspdk_vhost.so 00:03:21.838 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:21.838 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:21.838 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:21.838 SO libspdk_iscsi.so.8.0 00:03:21.838 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:21.838 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:21.838 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:21.838 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:21.838 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:21.838 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:21.838 SYMLINK libspdk_nvmf.so 00:03:21.838 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:21.838 SYMLINK libspdk_iscsi.so 00:03:21.838 CC lib/ftl/base/ftl_base_dev.o 00:03:21.838 CC lib/ftl/base/ftl_base_bdev.o 00:03:21.838 CC lib/ftl/ftl_trace.o 00:03:22.096 LIB libspdk_ftl.a 00:03:22.353 SO libspdk_ftl.so.9.0 00:03:22.611 SYMLINK libspdk_ftl.so 00:03:22.869 CC module/env_dpdk/env_dpdk_rpc.o 00:03:22.869 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.869 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:22.869 CC module/keyring/file/keyring.o 00:03:22.869 CC module/fsdev/aio/fsdev_aio.o 00:03:22.869 CC module/keyring/linux/keyring.o 00:03:22.869 CC module/sock/posix/posix.o 00:03:22.869 CC module/blob/bdev/blob_bdev.o 00:03:22.869 CC module/scheduler/gscheduler/gscheduler.o 00:03:22.869 CC module/accel/error/accel_error.o 00:03:22.869 LIB libspdk_env_dpdk_rpc.a 00:03:22.869 SO libspdk_env_dpdk_rpc.so.6.0 00:03:23.127 LIB libspdk_scheduler_gscheduler.a 00:03:23.127 SYMLINK libspdk_env_dpdk_rpc.so 00:03:23.127 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:23.127 SO libspdk_scheduler_gscheduler.so.4.0 00:03:23.127 CC module/keyring/linux/keyring_rpc.o 00:03:23.127 CC module/keyring/file/keyring_rpc.o 00:03:23.127 LIB libspdk_scheduler_dpdk_governor.a 00:03:23.127 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:23.127 SYMLINK libspdk_scheduler_gscheduler.so 00:03:23.127 CC module/accel/error/accel_error_rpc.o 00:03:23.127 CC module/fsdev/aio/linux_aio_mgr.o 00:03:23.127 LIB libspdk_scheduler_dynamic.a 00:03:23.127 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:23.127 SO libspdk_scheduler_dynamic.so.4.0 00:03:23.127 LIB libspdk_blob_bdev.a 00:03:23.127 SO libspdk_blob_bdev.so.12.0 00:03:23.127 LIB libspdk_keyring_linux.a 00:03:23.127 LIB libspdk_keyring_file.a 00:03:23.127 SYMLINK libspdk_scheduler_dynamic.so 00:03:23.127 SO libspdk_keyring_linux.so.1.0 00:03:23.127 SO libspdk_keyring_file.so.2.0 00:03:23.127 SYMLINK libspdk_blob_bdev.so 00:03:23.127 LIB libspdk_accel_error.a 00:03:23.127 SYMLINK libspdk_keyring_linux.so 00:03:23.127 SO libspdk_accel_error.so.2.0 00:03:23.127 SYMLINK libspdk_keyring_file.so 00:03:23.387 SYMLINK libspdk_accel_error.so 00:03:23.387 CC module/accel/ioat/accel_ioat.o 00:03:23.387 CC module/accel/ioat/accel_ioat_rpc.o 00:03:23.387 CC module/accel/iaa/accel_iaa.o 00:03:23.387 CC module/accel/dsa/accel_dsa.o 00:03:23.387 CC module/accel/iaa/accel_iaa_rpc.o 00:03:23.387 CC module/bdev/gpt/gpt.o 00:03:23.387 CC module/bdev/error/vbdev_error.o 00:03:23.387 CC module/bdev/delay/vbdev_delay.o 00:03:23.387 CC module/blobfs/bdev/blobfs_bdev.o 00:03:23.387 LIB libspdk_accel_ioat.a 00:03:23.387 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:23.387 SO libspdk_accel_ioat.so.6.0 00:03:23.387 LIB libspdk_accel_iaa.a 00:03:23.645 LIB libspdk_fsdev_aio.a 00:03:23.645 SO libspdk_accel_iaa.so.3.0 00:03:23.645 SYMLINK libspdk_accel_ioat.so 00:03:23.645 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:23.645 SO libspdk_fsdev_aio.so.1.0 00:03:23.645 CC module/bdev/gpt/vbdev_gpt.o 00:03:23.645 CC module/accel/dsa/accel_dsa_rpc.o 00:03:23.645 CC module/bdev/error/vbdev_error_rpc.o 00:03:23.645 SYMLINK libspdk_accel_iaa.so 00:03:23.645 SYMLINK libspdk_fsdev_aio.so 00:03:23.645 LIB libspdk_sock_posix.a 00:03:23.645 SO libspdk_sock_posix.so.6.0 00:03:23.645 LIB libspdk_accel_dsa.a 00:03:23.645 SO libspdk_accel_dsa.so.5.0 00:03:23.645 CC module/bdev/lvol/vbdev_lvol.o 00:03:23.645 LIB libspdk_blobfs_bdev.a 00:03:23.645 CC module/bdev/malloc/bdev_malloc.o 00:03:23.645 LIB libspdk_bdev_delay.a 00:03:23.645 SO libspdk_blobfs_bdev.so.6.0 
00:03:23.645 SYMLINK libspdk_sock_posix.so 00:03:23.645 CC module/bdev/null/bdev_null.o 00:03:23.645 CC module/bdev/null/bdev_null_rpc.o 00:03:23.645 CC module/bdev/nvme/bdev_nvme.o 00:03:23.645 SO libspdk_bdev_delay.so.6.0 00:03:23.645 LIB libspdk_bdev_error.a 00:03:23.645 LIB libspdk_bdev_gpt.a 00:03:23.903 SYMLINK libspdk_accel_dsa.so 00:03:23.903 SO libspdk_bdev_error.so.6.0 00:03:23.903 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.903 SO libspdk_bdev_gpt.so.6.0 00:03:23.903 SYMLINK libspdk_blobfs_bdev.so 00:03:23.903 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:23.903 SYMLINK libspdk_bdev_delay.so 00:03:23.903 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:23.903 SYMLINK libspdk_bdev_error.so 00:03:23.903 SYMLINK libspdk_bdev_gpt.so 00:03:23.903 CC module/bdev/nvme/nvme_rpc.o 00:03:23.903 LIB libspdk_bdev_null.a 00:03:23.903 CC module/bdev/nvme/bdev_mdns_client.o 00:03:23.903 SO libspdk_bdev_null.so.6.0 00:03:23.903 CC module/bdev/passthru/vbdev_passthru.o 00:03:23.903 CC module/bdev/raid/bdev_raid.o 00:03:23.903 SYMLINK libspdk_bdev_null.so 00:03:23.903 CC module/bdev/raid/bdev_raid_rpc.o 00:03:24.161 CC module/bdev/raid/bdev_raid_sb.o 00:03:24.161 LIB libspdk_bdev_malloc.a 00:03:24.161 CC module/bdev/raid/raid0.o 00:03:24.161 SO libspdk_bdev_malloc.so.6.0 00:03:24.161 CC module/bdev/raid/raid1.o 00:03:24.161 SYMLINK libspdk_bdev_malloc.so 00:03:24.161 CC module/bdev/raid/concat.o 00:03:24.161 LIB libspdk_bdev_lvol.a 00:03:24.161 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:24.161 SO libspdk_bdev_lvol.so.6.0 00:03:24.161 CC module/bdev/nvme/vbdev_opal.o 00:03:24.161 SYMLINK libspdk_bdev_lvol.so 00:03:24.419 LIB libspdk_bdev_passthru.a 00:03:24.419 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:24.419 SO libspdk_bdev_passthru.so.6.0 00:03:24.419 SYMLINK libspdk_bdev_passthru.so 00:03:24.419 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:24.419 CC module/bdev/split/vbdev_split.o 00:03:24.419 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:24.419 CC module/bdev/xnvme/bdev_xnvme.o 00:03:24.419 CC module/bdev/aio/bdev_aio.o 00:03:24.419 CC module/bdev/ftl/bdev_ftl.o 00:03:24.677 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:24.677 CC module/bdev/iscsi/bdev_iscsi.o 00:03:24.677 CC module/bdev/split/vbdev_split_rpc.o 00:03:24.677 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.677 LIB libspdk_bdev_split.a 00:03:24.677 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:24.677 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.677 SO libspdk_bdev_split.so.6.0 00:03:24.677 CC module/bdev/aio/bdev_aio_rpc.o 00:03:24.677 SYMLINK libspdk_bdev_split.so 00:03:24.677 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:24.677 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.936 LIB libspdk_bdev_ftl.a 00:03:24.936 LIB libspdk_bdev_zone_block.a 00:03:24.936 SO libspdk_bdev_zone_block.so.6.0 00:03:24.936 SO libspdk_bdev_ftl.so.6.0 00:03:24.936 LIB libspdk_bdev_aio.a 00:03:24.936 SYMLINK libspdk_bdev_ftl.so 00:03:24.936 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.936 SYMLINK libspdk_bdev_zone_block.so 00:03:24.936 SO libspdk_bdev_aio.so.6.0 00:03:24.936 LIB libspdk_bdev_xnvme.a 00:03:24.936 SYMLINK libspdk_bdev_aio.so 00:03:24.936 SO libspdk_bdev_xnvme.so.3.0 00:03:24.936 LIB libspdk_bdev_virtio.a 00:03:24.936 LIB libspdk_bdev_raid.a 00:03:24.936 SYMLINK libspdk_bdev_xnvme.so 00:03:24.936 LIB libspdk_bdev_iscsi.a 00:03:24.936 SO libspdk_bdev_virtio.so.6.0 00:03:24.936 SO libspdk_bdev_raid.so.6.0 00:03:25.194 SO libspdk_bdev_iscsi.so.6.0 00:03:25.194 SYMLINK libspdk_bdev_virtio.so 00:03:25.194 SYMLINK 
libspdk_bdev_raid.so 00:03:25.194 SYMLINK libspdk_bdev_iscsi.so 00:03:25.760 LIB libspdk_bdev_nvme.a 00:03:26.019 SO libspdk_bdev_nvme.so.7.1 00:03:26.019 SYMLINK libspdk_bdev_nvme.so 00:03:26.277 CC module/event/subsystems/keyring/keyring.o 00:03:26.277 CC module/event/subsystems/sock/sock.o 00:03:26.277 CC module/event/subsystems/scheduler/scheduler.o 00:03:26.277 CC module/event/subsystems/iobuf/iobuf.o 00:03:26.277 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:26.277 CC module/event/subsystems/vmd/vmd.o 00:03:26.277 CC module/event/subsystems/fsdev/fsdev.o 00:03:26.277 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:26.277 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:26.535 LIB libspdk_event_keyring.a 00:03:26.535 LIB libspdk_event_fsdev.a 00:03:26.535 LIB libspdk_event_scheduler.a 00:03:26.535 SO libspdk_event_fsdev.so.1.0 00:03:26.535 SO libspdk_event_keyring.so.1.0 00:03:26.535 LIB libspdk_event_sock.a 00:03:26.535 LIB libspdk_event_vhost_blk.a 00:03:26.535 SO libspdk_event_scheduler.so.4.0 00:03:26.535 LIB libspdk_event_vmd.a 00:03:26.535 LIB libspdk_event_iobuf.a 00:03:26.535 SO libspdk_event_sock.so.5.0 00:03:26.535 SO libspdk_event_vhost_blk.so.3.0 00:03:26.535 SO libspdk_event_vmd.so.6.0 00:03:26.535 SYMLINK libspdk_event_fsdev.so 00:03:26.535 SO libspdk_event_iobuf.so.3.0 00:03:26.535 SYMLINK libspdk_event_keyring.so 00:03:26.535 SYMLINK libspdk_event_scheduler.so 00:03:26.535 SYMLINK libspdk_event_vhost_blk.so 00:03:26.535 SYMLINK libspdk_event_sock.so 00:03:26.535 SYMLINK libspdk_event_vmd.so 00:03:26.535 SYMLINK libspdk_event_iobuf.so 00:03:26.792 CC module/event/subsystems/accel/accel.o 00:03:27.050 LIB libspdk_event_accel.a 00:03:27.050 SO libspdk_event_accel.so.6.0 00:03:27.050 SYMLINK libspdk_event_accel.so 00:03:27.309 CC module/event/subsystems/bdev/bdev.o 00:03:27.309 LIB libspdk_event_bdev.a 00:03:27.309 SO libspdk_event_bdev.so.6.0 00:03:27.309 SYMLINK libspdk_event_bdev.so 00:03:27.634 CC module/event/subsystems/scsi/scsi.o 00:03:27.634 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:27.634 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:27.634 CC module/event/subsystems/ublk/ublk.o 00:03:27.634 CC module/event/subsystems/nbd/nbd.o 00:03:27.634 LIB libspdk_event_nbd.a 00:03:27.634 LIB libspdk_event_ublk.a 00:03:27.634 SO libspdk_event_nbd.so.6.0 00:03:27.634 LIB libspdk_event_scsi.a 00:03:27.634 SO libspdk_event_ublk.so.3.0 00:03:27.634 SO libspdk_event_scsi.so.6.0 00:03:27.634 SYMLINK libspdk_event_nbd.so 00:03:27.634 SYMLINK libspdk_event_ublk.so 00:03:27.896 SYMLINK libspdk_event_scsi.so 00:03:27.896 LIB libspdk_event_nvmf.a 00:03:27.896 SO libspdk_event_nvmf.so.6.0 00:03:27.896 SYMLINK libspdk_event_nvmf.so 00:03:27.896 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:27.896 CC module/event/subsystems/iscsi/iscsi.o 00:03:28.154 LIB libspdk_event_vhost_scsi.a 00:03:28.154 LIB libspdk_event_iscsi.a 00:03:28.154 SO libspdk_event_vhost_scsi.so.3.0 00:03:28.154 SO libspdk_event_iscsi.so.6.0 00:03:28.154 SYMLINK libspdk_event_vhost_scsi.so 00:03:28.154 SYMLINK libspdk_event_iscsi.so 00:03:28.413 SO libspdk.so.6.0 00:03:28.413 SYMLINK libspdk.so 00:03:28.413 CXX app/trace/trace.o 00:03:28.413 CC app/spdk_lspci/spdk_lspci.o 00:03:28.413 CC app/trace_record/trace_record.o 00:03:28.671 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:28.671 CC app/nvmf_tgt/nvmf_main.o 00:03:28.671 CC app/iscsi_tgt/iscsi_tgt.o 00:03:28.671 CC app/spdk_tgt/spdk_tgt.o 00:03:28.671 CC examples/ioat/perf/perf.o 00:03:28.671 CC test/thread/poller_perf/poller_perf.o 
00:03:28.671 CC examples/util/zipf/zipf.o 00:03:28.671 LINK spdk_lspci 00:03:28.671 LINK spdk_tgt 00:03:28.671 LINK poller_perf 00:03:28.671 LINK nvmf_tgt 00:03:28.671 LINK interrupt_tgt 00:03:28.671 LINK zipf 00:03:28.671 LINK ioat_perf 00:03:28.671 LINK iscsi_tgt 00:03:28.671 LINK spdk_trace_record 00:03:28.929 LINK spdk_trace 00:03:28.929 CC test/dma/test_dma/test_dma.o 00:03:28.929 CC app/spdk_nvme_perf/perf.o 00:03:28.930 CC app/spdk_nvme_identify/identify.o 00:03:28.930 CC examples/ioat/verify/verify.o 00:03:28.930 CC app/spdk_nvme_discover/discovery_aer.o 00:03:28.930 TEST_HEADER include/spdk/accel.h 00:03:28.930 TEST_HEADER include/spdk/accel_module.h 00:03:28.930 TEST_HEADER include/spdk/assert.h 00:03:28.930 TEST_HEADER include/spdk/barrier.h 00:03:28.930 TEST_HEADER include/spdk/base64.h 00:03:28.930 TEST_HEADER include/spdk/bdev.h 00:03:28.930 TEST_HEADER include/spdk/bdev_module.h 00:03:28.930 TEST_HEADER include/spdk/bdev_zone.h 00:03:28.930 TEST_HEADER include/spdk/bit_array.h 00:03:28.930 TEST_HEADER include/spdk/bit_pool.h 00:03:28.930 CC app/spdk_top/spdk_top.o 00:03:28.930 TEST_HEADER include/spdk/blob_bdev.h 00:03:28.930 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:28.930 TEST_HEADER include/spdk/blobfs.h 00:03:28.930 TEST_HEADER include/spdk/blob.h 00:03:28.930 TEST_HEADER include/spdk/conf.h 00:03:28.930 TEST_HEADER include/spdk/config.h 00:03:28.930 TEST_HEADER include/spdk/cpuset.h 00:03:28.930 TEST_HEADER include/spdk/crc16.h 00:03:28.930 TEST_HEADER include/spdk/crc32.h 00:03:28.930 TEST_HEADER include/spdk/crc64.h 00:03:28.930 TEST_HEADER include/spdk/dif.h 00:03:28.930 CC test/app/bdev_svc/bdev_svc.o 00:03:28.930 TEST_HEADER include/spdk/dma.h 00:03:28.930 TEST_HEADER include/spdk/endian.h 00:03:28.930 TEST_HEADER include/spdk/env_dpdk.h 00:03:28.930 TEST_HEADER include/spdk/env.h 00:03:28.930 TEST_HEADER include/spdk/event.h 00:03:28.930 TEST_HEADER include/spdk/fd_group.h 00:03:28.930 TEST_HEADER include/spdk/fd.h 00:03:28.930 TEST_HEADER include/spdk/file.h 00:03:28.930 TEST_HEADER include/spdk/fsdev.h 00:03:28.930 TEST_HEADER include/spdk/fsdev_module.h 00:03:28.930 TEST_HEADER include/spdk/ftl.h 00:03:28.930 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:28.930 TEST_HEADER include/spdk/gpt_spec.h 00:03:28.930 TEST_HEADER include/spdk/hexlify.h 00:03:28.930 TEST_HEADER include/spdk/histogram_data.h 00:03:28.930 TEST_HEADER include/spdk/idxd.h 00:03:28.930 TEST_HEADER include/spdk/idxd_spec.h 00:03:28.930 TEST_HEADER include/spdk/init.h 00:03:28.930 TEST_HEADER include/spdk/ioat.h 00:03:28.930 TEST_HEADER include/spdk/ioat_spec.h 00:03:28.930 TEST_HEADER include/spdk/iscsi_spec.h 00:03:28.930 TEST_HEADER include/spdk/json.h 00:03:28.930 TEST_HEADER include/spdk/jsonrpc.h 00:03:28.930 TEST_HEADER include/spdk/keyring.h 00:03:28.930 TEST_HEADER include/spdk/keyring_module.h 00:03:28.930 TEST_HEADER include/spdk/likely.h 00:03:28.930 TEST_HEADER include/spdk/log.h 00:03:28.930 TEST_HEADER include/spdk/lvol.h 00:03:28.930 TEST_HEADER include/spdk/md5.h 00:03:28.930 TEST_HEADER include/spdk/memory.h 00:03:28.930 TEST_HEADER include/spdk/mmio.h 00:03:28.930 TEST_HEADER include/spdk/nbd.h 00:03:28.930 TEST_HEADER include/spdk/net.h 00:03:28.930 TEST_HEADER include/spdk/notify.h 00:03:28.930 TEST_HEADER include/spdk/nvme.h 00:03:28.930 TEST_HEADER include/spdk/nvme_intel.h 00:03:28.930 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:28.930 CC examples/thread/thread/thread_ex.o 00:03:28.930 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:28.930 TEST_HEADER 
include/spdk/nvme_spec.h 00:03:28.930 TEST_HEADER include/spdk/nvme_zns.h 00:03:28.930 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:28.930 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:28.930 TEST_HEADER include/spdk/nvmf.h 00:03:28.930 TEST_HEADER include/spdk/nvmf_spec.h 00:03:28.930 TEST_HEADER include/spdk/nvmf_transport.h 00:03:28.930 TEST_HEADER include/spdk/opal.h 00:03:28.930 TEST_HEADER include/spdk/opal_spec.h 00:03:28.930 TEST_HEADER include/spdk/pci_ids.h 00:03:28.930 TEST_HEADER include/spdk/pipe.h 00:03:29.188 TEST_HEADER include/spdk/queue.h 00:03:29.188 TEST_HEADER include/spdk/reduce.h 00:03:29.188 TEST_HEADER include/spdk/rpc.h 00:03:29.188 TEST_HEADER include/spdk/scheduler.h 00:03:29.188 TEST_HEADER include/spdk/scsi.h 00:03:29.188 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.188 CC app/spdk_dd/spdk_dd.o 00:03:29.188 TEST_HEADER include/spdk/sock.h 00:03:29.188 TEST_HEADER include/spdk/stdinc.h 00:03:29.188 TEST_HEADER include/spdk/string.h 00:03:29.188 TEST_HEADER include/spdk/thread.h 00:03:29.188 TEST_HEADER include/spdk/trace.h 00:03:29.188 TEST_HEADER include/spdk/trace_parser.h 00:03:29.188 TEST_HEADER include/spdk/tree.h 00:03:29.188 TEST_HEADER include/spdk/ublk.h 00:03:29.188 TEST_HEADER include/spdk/util.h 00:03:29.188 TEST_HEADER include/spdk/uuid.h 00:03:29.188 TEST_HEADER include/spdk/version.h 00:03:29.188 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.188 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.188 TEST_HEADER include/spdk/vhost.h 00:03:29.188 TEST_HEADER include/spdk/vmd.h 00:03:29.188 TEST_HEADER include/spdk/xor.h 00:03:29.188 TEST_HEADER include/spdk/zipf.h 00:03:29.188 CXX test/cpp_headers/accel.o 00:03:29.188 LINK verify 00:03:29.188 LINK spdk_nvme_discover 00:03:29.188 LINK bdev_svc 00:03:29.188 CXX test/cpp_headers/accel_module.o 00:03:29.188 LINK thread 00:03:29.447 LINK test_dma 00:03:29.447 CXX test/cpp_headers/assert.o 00:03:29.447 LINK spdk_dd 00:03:29.447 CC app/fio/nvme/fio_plugin.o 00:03:29.447 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:29.447 CC test/env/mem_callbacks/mem_callbacks.o 00:03:29.447 CXX test/cpp_headers/barrier.o 00:03:29.447 LINK spdk_nvme_identify 00:03:29.705 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.705 CC examples/sock/hello_world/hello_sock.o 00:03:29.705 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:29.705 CXX test/cpp_headers/base64.o 00:03:29.705 CXX test/cpp_headers/bdev.o 00:03:29.705 LINK spdk_top 00:03:29.705 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:29.705 LINK spdk_nvme_perf 00:03:29.963 CXX test/cpp_headers/bdev_module.o 00:03:29.963 LINK hello_sock 00:03:29.963 LINK nvme_fuzz 00:03:29.963 CC test/env/vtophys/vtophys.o 00:03:29.963 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:29.963 LINK spdk_nvme 00:03:29.963 LINK mem_callbacks 00:03:29.963 CC test/app/histogram_perf/histogram_perf.o 00:03:29.963 CXX test/cpp_headers/bdev_zone.o 00:03:29.963 LINK vtophys 00:03:29.963 CC test/app/jsoncat/jsoncat.o 00:03:29.963 LINK env_dpdk_post_init 00:03:30.221 LINK histogram_perf 00:03:30.221 CC examples/vmd/lsvmd/lsvmd.o 00:03:30.221 LINK vhost_fuzz 00:03:30.221 CC app/fio/bdev/fio_plugin.o 00:03:30.221 LINK jsoncat 00:03:30.221 CXX test/cpp_headers/bit_array.o 00:03:30.221 CC examples/idxd/perf/perf.o 00:03:30.221 LINK lsvmd 00:03:30.221 CC examples/vmd/led/led.o 00:03:30.221 CC test/env/memory/memory_ut.o 00:03:30.221 CC test/env/pci/pci_ut.o 00:03:30.221 CXX test/cpp_headers/bit_pool.o 00:03:30.480 LINK led 00:03:30.480 CC test/event/event_perf/event_perf.o 00:03:30.480 CC 
examples/fsdev/hello_world/hello_fsdev.o 00:03:30.480 CXX test/cpp_headers/blob_bdev.o 00:03:30.480 CC app/vhost/vhost.o 00:03:30.480 LINK event_perf 00:03:30.480 LINK idxd_perf 00:03:30.739 LINK spdk_bdev 00:03:30.739 CXX test/cpp_headers/blobfs_bdev.o 00:03:30.739 LINK vhost 00:03:30.739 CC test/nvme/aer/aer.o 00:03:30.739 LINK pci_ut 00:03:30.739 CC test/event/reactor/reactor.o 00:03:30.739 LINK hello_fsdev 00:03:30.739 CC test/rpc_client/rpc_client_test.o 00:03:30.739 CXX test/cpp_headers/blobfs.o 00:03:30.739 LINK reactor 00:03:30.739 CC test/event/reactor_perf/reactor_perf.o 00:03:30.998 LINK iscsi_fuzz 00:03:30.998 CC test/accel/dif/dif.o 00:03:30.998 LINK rpc_client_test 00:03:30.998 LINK aer 00:03:30.998 CXX test/cpp_headers/blob.o 00:03:30.998 LINK reactor_perf 00:03:30.998 CC test/event/app_repeat/app_repeat.o 00:03:30.998 CC examples/accel/perf/accel_perf.o 00:03:30.998 CC test/event/scheduler/scheduler.o 00:03:30.998 CXX test/cpp_headers/conf.o 00:03:30.998 CC test/nvme/reset/reset.o 00:03:30.998 CC test/app/stub/stub.o 00:03:30.998 LINK app_repeat 00:03:30.998 CC test/nvme/sgl/sgl.o 00:03:31.257 CXX test/cpp_headers/config.o 00:03:31.257 CXX test/cpp_headers/cpuset.o 00:03:31.257 CXX test/cpp_headers/crc16.o 00:03:31.257 LINK scheduler 00:03:31.257 CC examples/blob/hello_world/hello_blob.o 00:03:31.257 LINK stub 00:03:31.257 LINK memory_ut 00:03:31.257 LINK reset 00:03:31.257 LINK sgl 00:03:31.257 CXX test/cpp_headers/crc32.o 00:03:31.257 CXX test/cpp_headers/crc64.o 00:03:31.517 CC examples/blob/cli/blobcli.o 00:03:31.517 LINK accel_perf 00:03:31.517 LINK hello_blob 00:03:31.517 CXX test/cpp_headers/dif.o 00:03:31.517 CC examples/nvme/hello_world/hello_world.o 00:03:31.517 CC test/nvme/e2edp/nvme_dp.o 00:03:31.517 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:31.517 CC examples/nvme/arbitration/arbitration.o 00:03:31.517 CC examples/nvme/reconnect/reconnect.o 00:03:31.517 LINK dif 00:03:31.517 CXX test/cpp_headers/dma.o 00:03:31.777 CC examples/nvme/hotplug/hotplug.o 00:03:31.777 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.777 LINK nvme_dp 00:03:31.777 LINK hello_world 00:03:31.777 CXX test/cpp_headers/endian.o 00:03:31.777 LINK blobcli 00:03:31.777 LINK cmb_copy 00:03:31.777 LINK arbitration 00:03:31.777 LINK reconnect 00:03:31.777 LINK hotplug 00:03:31.777 CXX test/cpp_headers/env_dpdk.o 00:03:31.777 CC test/blobfs/mkfs/mkfs.o 00:03:32.036 CC examples/nvme/abort/abort.o 00:03:32.036 CC test/nvme/overhead/overhead.o 00:03:32.036 CXX test/cpp_headers/env.o 00:03:32.036 LINK mkfs 00:03:32.036 CC test/nvme/err_injection/err_injection.o 00:03:32.036 LINK nvme_manage 00:03:32.036 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:32.036 CC test/nvme/startup/startup.o 00:03:32.036 CC examples/bdev/hello_world/hello_bdev.o 00:03:32.294 CXX test/cpp_headers/event.o 00:03:32.294 CXX test/cpp_headers/fd_group.o 00:03:32.294 LINK overhead 00:03:32.294 LINK err_injection 00:03:32.294 LINK startup 00:03:32.294 CC test/lvol/esnap/esnap.o 00:03:32.294 LINK pmr_persistence 00:03:32.294 LINK abort 00:03:32.294 LINK hello_bdev 00:03:32.294 CXX test/cpp_headers/fd.o 00:03:32.294 CC test/bdev/bdevio/bdevio.o 00:03:32.294 CC test/nvme/reserve/reserve.o 00:03:32.294 CC test/nvme/simple_copy/simple_copy.o 00:03:32.554 CXX test/cpp_headers/file.o 00:03:32.554 CC test/nvme/connect_stress/connect_stress.o 00:03:32.554 CC test/nvme/boot_partition/boot_partition.o 00:03:32.554 CC examples/bdev/bdevperf/bdevperf.o 00:03:32.554 CC test/nvme/compliance/nvme_compliance.o 00:03:32.554 CXX 
test/cpp_headers/fsdev.o 00:03:32.554 CXX test/cpp_headers/fsdev_module.o 00:03:32.554 LINK reserve 00:03:32.554 LINK boot_partition 00:03:32.554 LINK simple_copy 00:03:32.554 LINK connect_stress 00:03:32.554 CXX test/cpp_headers/ftl.o 00:03:32.554 CXX test/cpp_headers/fuse_dispatcher.o 00:03:32.812 LINK bdevio 00:03:32.812 CXX test/cpp_headers/gpt_spec.o 00:03:32.812 CC test/nvme/fused_ordering/fused_ordering.o 00:03:32.812 CXX test/cpp_headers/hexlify.o 00:03:32.812 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:32.812 LINK nvme_compliance 00:03:32.812 CC test/nvme/fdp/fdp.o 00:03:32.812 CC test/nvme/cuse/cuse.o 00:03:32.812 CXX test/cpp_headers/histogram_data.o 00:03:32.812 CXX test/cpp_headers/idxd.o 00:03:32.812 LINK fused_ordering 00:03:32.812 CXX test/cpp_headers/idxd_spec.o 00:03:32.812 CXX test/cpp_headers/init.o 00:03:33.069 LINK doorbell_aers 00:03:33.069 CXX test/cpp_headers/ioat.o 00:03:33.069 CXX test/cpp_headers/ioat_spec.o 00:03:33.069 CXX test/cpp_headers/iscsi_spec.o 00:03:33.069 CXX test/cpp_headers/json.o 00:03:33.069 CXX test/cpp_headers/jsonrpc.o 00:03:33.069 CXX test/cpp_headers/keyring.o 00:03:33.069 CXX test/cpp_headers/keyring_module.o 00:03:33.069 CXX test/cpp_headers/likely.o 00:03:33.069 LINK fdp 00:03:33.069 CXX test/cpp_headers/log.o 00:03:33.069 CXX test/cpp_headers/lvol.o 00:03:33.069 CXX test/cpp_headers/md5.o 00:03:33.069 CXX test/cpp_headers/memory.o 00:03:33.069 CXX test/cpp_headers/mmio.o 00:03:33.328 CXX test/cpp_headers/nbd.o 00:03:33.328 CXX test/cpp_headers/net.o 00:03:33.328 CXX test/cpp_headers/notify.o 00:03:33.328 CXX test/cpp_headers/nvme.o 00:03:33.328 LINK bdevperf 00:03:33.328 CXX test/cpp_headers/nvme_intel.o 00:03:33.328 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.328 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.328 CXX test/cpp_headers/nvme_spec.o 00:03:33.328 CXX test/cpp_headers/nvme_zns.o 00:03:33.328 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.328 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.328 CXX test/cpp_headers/nvmf.o 00:03:33.328 CXX test/cpp_headers/nvmf_spec.o 00:03:33.588 CXX test/cpp_headers/nvmf_transport.o 00:03:33.588 CXX test/cpp_headers/opal.o 00:03:33.588 CXX test/cpp_headers/opal_spec.o 00:03:33.588 CXX test/cpp_headers/pci_ids.o 00:03:33.588 CXX test/cpp_headers/pipe.o 00:03:33.588 CXX test/cpp_headers/queue.o 00:03:33.588 CXX test/cpp_headers/reduce.o 00:03:33.588 CXX test/cpp_headers/rpc.o 00:03:33.588 CXX test/cpp_headers/scheduler.o 00:03:33.588 CXX test/cpp_headers/scsi.o 00:03:33.588 CC examples/nvmf/nvmf/nvmf.o 00:03:33.588 CXX test/cpp_headers/scsi_spec.o 00:03:33.588 CXX test/cpp_headers/sock.o 00:03:33.588 CXX test/cpp_headers/stdinc.o 00:03:33.588 CXX test/cpp_headers/string.o 00:03:33.846 CXX test/cpp_headers/thread.o 00:03:33.846 CXX test/cpp_headers/trace.o 00:03:33.846 CXX test/cpp_headers/trace_parser.o 00:03:33.846 CXX test/cpp_headers/tree.o 00:03:33.846 CXX test/cpp_headers/ublk.o 00:03:33.846 CXX test/cpp_headers/util.o 00:03:33.846 CXX test/cpp_headers/uuid.o 00:03:33.846 CXX test/cpp_headers/version.o 00:03:33.846 CXX test/cpp_headers/vfio_user_pci.o 00:03:33.846 CXX test/cpp_headers/vfio_user_spec.o 00:03:33.846 CXX test/cpp_headers/vhost.o 00:03:33.846 LINK nvmf 00:03:33.846 CXX test/cpp_headers/vmd.o 00:03:33.846 CXX test/cpp_headers/xor.o 00:03:33.846 CXX test/cpp_headers/zipf.o 00:03:34.106 LINK cuse 00:03:37.407 LINK esnap 00:03:37.407 00:03:37.407 real 1m4.517s 00:03:37.407 user 6m2.033s 00:03:37.407 sys 1m5.219s 00:03:37.407 ************************************ 00:03:37.407 END TEST 
make 00:03:37.407 ************************************ 00:03:37.407 13:56:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:37.407 13:56:38 make -- common/autotest_common.sh@10 -- $ set +x 00:03:37.407 13:56:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:37.407 13:56:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:37.407 13:56:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:37.407 13:56:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.407 13:56:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:37.407 13:56:38 -- pm/common@44 -- $ pid=5070 00:03:37.407 13:56:38 -- pm/common@50 -- $ kill -TERM 5070 00:03:37.407 13:56:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.407 13:56:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:37.407 13:56:38 -- pm/common@44 -- $ pid=5071 00:03:37.407 13:56:38 -- pm/common@50 -- $ kill -TERM 5071 00:03:37.407 13:56:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:37.407 13:56:38 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:37.407 13:56:38 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:37.407 13:56:38 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:37.407 13:56:38 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:37.407 13:56:38 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:37.407 13:56:38 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.407 13:56:38 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.407 13:56:38 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.407 13:56:38 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.407 13:56:38 -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.407 13:56:38 -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.407 13:56:38 -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.407 13:56:38 -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.407 13:56:38 -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.407 13:56:38 -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.407 13:56:38 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.407 13:56:38 -- scripts/common.sh@344 -- # case "$op" in 00:03:37.407 13:56:38 -- scripts/common.sh@345 -- # : 1 00:03:37.407 13:56:38 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.407 13:56:38 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.407 13:56:38 -- scripts/common.sh@365 -- # decimal 1 00:03:37.407 13:56:38 -- scripts/common.sh@353 -- # local d=1 00:03:37.407 13:56:38 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.407 13:56:38 -- scripts/common.sh@355 -- # echo 1 00:03:37.407 13:56:38 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.407 13:56:38 -- scripts/common.sh@366 -- # decimal 2 00:03:37.407 13:56:38 -- scripts/common.sh@353 -- # local d=2 00:03:37.407 13:56:38 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.407 13:56:38 -- scripts/common.sh@355 -- # echo 2 00:03:37.407 13:56:38 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.407 13:56:38 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.407 13:56:38 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.407 13:56:38 -- scripts/common.sh@368 -- # return 0 00:03:37.407 13:56:38 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.407 13:56:38 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.407 --rc genhtml_branch_coverage=1 00:03:37.407 --rc genhtml_function_coverage=1 00:03:37.407 --rc genhtml_legend=1 00:03:37.407 --rc geninfo_all_blocks=1 00:03:37.407 --rc geninfo_unexecuted_blocks=1 00:03:37.407 00:03:37.407 ' 00:03:37.407 13:56:38 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.407 --rc genhtml_branch_coverage=1 00:03:37.407 --rc genhtml_function_coverage=1 00:03:37.407 --rc genhtml_legend=1 00:03:37.407 --rc geninfo_all_blocks=1 00:03:37.407 --rc geninfo_unexecuted_blocks=1 00:03:37.407 00:03:37.407 ' 00:03:37.407 13:56:38 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.407 --rc genhtml_branch_coverage=1 00:03:37.407 --rc genhtml_function_coverage=1 00:03:37.407 --rc genhtml_legend=1 00:03:37.407 --rc geninfo_all_blocks=1 00:03:37.407 --rc geninfo_unexecuted_blocks=1 00:03:37.407 00:03:37.407 ' 00:03:37.407 13:56:38 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.407 --rc genhtml_branch_coverage=1 00:03:37.407 --rc genhtml_function_coverage=1 00:03:37.407 --rc genhtml_legend=1 00:03:37.407 --rc geninfo_all_blocks=1 00:03:37.407 --rc geninfo_unexecuted_blocks=1 00:03:37.407 00:03:37.407 ' 00:03:37.407 13:56:38 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:37.407 13:56:38 -- nvmf/common.sh@7 -- # uname -s 00:03:37.407 13:56:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.407 13:56:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.407 13:56:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.407 13:56:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.407 13:56:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.407 13:56:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.407 13:56:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.407 13:56:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.407 13:56:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.407 13:56:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.407 13:56:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a412fb6-ec4d-4742-888d-917af990c37a 00:03:37.407 
13:56:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=9a412fb6-ec4d-4742-888d-917af990c37a 00:03:37.408 13:56:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.408 13:56:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.408 13:56:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:37.408 13:56:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.408 13:56:39 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:37.408 13:56:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:37.408 13:56:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.408 13:56:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.408 13:56:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.408 13:56:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.408 13:56:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.408 13:56:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.408 13:56:39 -- paths/export.sh@5 -- # export PATH 00:03:37.408 13:56:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.408 13:56:39 -- nvmf/common.sh@51 -- # : 0 00:03:37.408 13:56:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:37.408 13:56:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:37.408 13:56:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.408 13:56:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.408 13:56:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.408 13:56:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:37.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:37.408 13:56:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:37.408 13:56:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:37.408 13:56:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:37.408 13:56:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:37.408 13:56:39 -- spdk/autotest.sh@32 -- # uname -s 00:03:37.408 13:56:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:37.408 13:56:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:37.408 13:56:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:37.408 13:56:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:37.408 13:56:39 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:37.408 13:56:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:37.408 13:56:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:37.408 13:56:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:37.408 13:56:39 -- spdk/autotest.sh@48 -- # udevadm_pid=54199 00:03:37.408 13:56:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:37.408 13:56:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:37.408 13:56:39 -- pm/common@17 -- # local monitor 00:03:37.408 13:56:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.408 13:56:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.408 13:56:39 -- pm/common@25 -- # sleep 1 00:03:37.408 13:56:39 -- pm/common@21 -- # date +%s 00:03:37.408 13:56:39 -- pm/common@21 -- # date +%s 00:03:37.408 13:56:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733752599 00:03:37.408 13:56:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733752599 00:03:37.408 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733752599_collect-cpu-load.pm.log 00:03:37.408 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733752599_collect-vmstat.pm.log 00:03:38.346 13:56:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:38.346 13:56:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:38.346 13:56:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.346 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:03:38.346 13:56:40 -- spdk/autotest.sh@59 -- # create_test_list 00:03:38.346 13:56:40 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:38.346 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:03:38.346 13:56:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:38.346 13:56:40 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:38.346 13:56:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:38.346 13:56:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:38.346 13:56:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:38.346 13:56:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:38.346 13:56:40 -- common/autotest_common.sh@1457 -- # uname 00:03:38.346 13:56:40 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:38.346 13:56:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:38.346 13:56:40 -- common/autotest_common.sh@1477 -- # uname 00:03:38.607 13:56:40 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:38.607 13:56:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:38.607 13:56:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:38.607 lcov: LCOV version 1.15 00:03:38.607 13:56:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:53.524 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.641 13:57:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:11.641 13:57:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.641 13:57:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.641 13:57:10 -- spdk/autotest.sh@78 -- # rm -f 00:04:11.641 13:57:10 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.641 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:11.641 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:11.641 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:11.641 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:11.641 13:57:11 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:11.641 13:57:11 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:11.641 13:57:11 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:11.641 13:57:11 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:11.641 13:57:11 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:11.641 13:57:11 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:11.641 13:57:11 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:11.641 13:57:11 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:11.641 13:57:11 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:11.641 13:57:11 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:11.641 13:57:11 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:11.641 13:57:11 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:11.641 13:57:11 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:11.641 13:57:11 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:11.641 13:57:11 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:11.641 13:57:11 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:11.641 13:57:11 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:11.641 13:57:11 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:11.641 13:57:11 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:11.641 13:57:11 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:04:11.641 13:57:11 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:04:11.641 13:57:11 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:11.641 13:57:11 -- 
common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:11.641 13:57:11 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:11.641 13:57:11 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:04:11.641 13:57:11 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:11.641 13:57:11 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:11.641 13:57:11 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n2 00:04:11.641 13:57:11 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:04:11.641 13:57:11 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:11.641 13:57:11 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.642 13:57:11 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:11.642 13:57:11 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n3 00:04:11.642 13:57:11 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:04:11.642 13:57:11 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:11.642 13:57:11 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:11.642 13:57:11 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:11.642 13:57:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.642 13:57:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:11.642 13:57:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:11.642 13:57:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:11.642 13:57:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:11.642 No valid GPT data, bailing 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # pt= 00:04:11.642 13:57:11 -- scripts/common.sh@395 -- # return 1 00:04:11.642 13:57:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:11.642 1+0 records in 00:04:11.642 1+0 records out 00:04:11.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535486 s, 196 MB/s 00:04:11.642 13:57:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.642 13:57:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:11.642 13:57:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:11.642 13:57:11 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:11.642 13:57:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:11.642 No valid GPT data, bailing 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # pt= 00:04:11.642 13:57:11 -- scripts/common.sh@395 -- # return 1 00:04:11.642 13:57:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:11.642 1+0 records in 00:04:11.642 1+0 records out 00:04:11.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264208 s, 39.7 MB/s 00:04:11.642 13:57:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.642 13:57:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:11.642 13:57:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:11.642 13:57:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:11.642 13:57:11 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:11.642 No valid GPT data, bailing 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # pt= 00:04:11.642 13:57:11 -- scripts/common.sh@395 -- # return 1 00:04:11.642 13:57:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:11.642 1+0 records in 00:04:11.642 1+0 records out 00:04:11.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005616 s, 187 MB/s 00:04:11.642 13:57:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.642 13:57:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:11.642 13:57:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:11.642 13:57:11 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:11.642 13:57:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:11.642 No valid GPT data, bailing 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # pt= 00:04:11.642 13:57:11 -- scripts/common.sh@395 -- # return 1 00:04:11.642 13:57:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:11.642 1+0 records in 00:04:11.642 1+0 records out 00:04:11.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0057273 s, 183 MB/s 00:04:11.642 13:57:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.642 13:57:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:11.642 13:57:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:04:11.642 13:57:11 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:04:11.642 13:57:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:11.642 No valid GPT data, bailing 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:11.642 13:57:11 -- scripts/common.sh@394 -- # pt= 00:04:11.642 13:57:11 -- scripts/common.sh@395 -- # return 1 00:04:11.642 13:57:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:11.642 1+0 records in 00:04:11.642 1+0 records out 00:04:11.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412866 s, 254 MB/s 00:04:11.642 13:57:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.642 13:57:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:11.642 13:57:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:04:11.642 13:57:11 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:04:11.642 13:57:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:11.642 No valid GPT data, bailing 00:04:11.642 13:57:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:11.642 13:57:12 -- scripts/common.sh@394 -- # pt= 00:04:11.642 13:57:12 -- scripts/common.sh@395 -- # return 1 00:04:11.642 13:57:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:11.642 1+0 records in 00:04:11.642 1+0 records out 00:04:11.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416905 s, 252 MB/s 00:04:11.642 13:57:12 -- spdk/autotest.sh@105 -- # sync 00:04:11.642 13:57:12 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:11.642 13:57:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:11.642 13:57:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:11.904 13:57:13 
-- spdk/autotest.sh@111 -- # uname -s 00:04:11.904 13:57:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:11.904 13:57:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:11.904 13:57:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:12.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.050 Hugepages 00:04:13.050 node hugesize free / total 00:04:13.050 node0 1048576kB 0 / 0 00:04:13.050 node0 2048kB 0 / 0 00:04:13.050 00:04:13.050 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.050 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:13.050 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:13.050 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:13.050 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:04:13.312 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:13.312 13:57:14 -- spdk/autotest.sh@117 -- # uname -s 00:04:13.312 13:57:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:13.312 13:57:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:13.312 13:57:14 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.147 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.147 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.147 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.147 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.407 13:57:15 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:15.349 13:57:16 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:15.349 13:57:16 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:15.349 13:57:16 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:15.349 13:57:16 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:15.349 13:57:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:15.349 13:57:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:15.349 13:57:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.349 13:57:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:15.349 13:57:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:15.349 13:57:17 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:15.349 13:57:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:15.349 13:57:17 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.872 Waiting for block devices as requested 00:04:15.873 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.873 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.135 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.135 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.429 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:21.429 13:57:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.429 13:57:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:21.429 
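
A few records up, every namespace got the same wipe check: spdk-gpt.py looks for SPDK-formatted GPT data, then `blkid` looks for any partition-table signature, and only when both come back empty does `dd` zero the first MiB so stale metadata cannot bleed into later tests. A minimal sketch of that shape (device node illustrative, spdk-gpt.py step omitted):

  # Wipe a namespace only when no partition-table signature is present,
  # mirroring the blkid -> dd sequence traced above.
  dev=/dev/nvme0n1   # illustrative
  pt=$(blkid -s PTTYPE -o value "$dev" || true)
  if [[ -z $pt ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1
  fi
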
13:57:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.429 13:57:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.429 13:57:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:21.429 13:57:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:21.429 13:57:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.429 13:57:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.429 13:57:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.429 13:57:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1543 -- # continue 00:04:21.429 13:57:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.429 13:57:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.429 13:57:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.429 13:57:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 
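
Each pass of the cleanup loop above distills two fields from `nvme id-ctrl`: OACS (optional admin command support, where bit 3 is namespace management, hence `oacs_ns_manage=8` from `0x12a`) and UNVMCAP (unallocated NVM capacity; 0 means there is nothing to revert). A compact equivalent of one pass, with an illustrative device node:

  # One controller's worth of the id-ctrl checks traced above.
  ctrl=/dev/nvme1   # illustrative
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
  (( oacs & 0x8 )) && echo "namespace management supported"
  unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
  [[ ${unvmcap// /} -eq 0 ]] && echo "no unallocated capacity to revert"
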
00:04:21.429 13:57:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1543 -- # continue 00:04:21.429 13:57:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.429 13:57:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.429 13:57:22 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.429 13:57:22 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.429 13:57:22 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.429 13:57:22 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.429 13:57:22 -- common/autotest_common.sh@1543 -- # continue 00:04:21.429 13:57:22 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.429 13:57:22 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:21.429 13:57:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:21.429 13:57:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:21.429 13:57:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:21.429 13:57:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:21.429 13:57:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:21.429 13:57:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:21.429 13:57:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.429 13:57:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:21.429 13:57:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.429 13:57:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.429 13:57:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.429 13:57:23 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.429 13:57:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:21.429 13:57:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.429 13:57:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.429 13:57:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.429 13:57:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.429 13:57:23 -- common/autotest_common.sh@1543 -- # continue 00:04:21.429 13:57:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:21.429 13:57:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.429 13:57:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.429 13:57:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:21.429 13:57:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.429 13:57:23 -- common/autotest_common.sh@10 -- # set +x 00:04:21.429 13:57:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.002 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.575 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.575 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.575 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.575 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.575 13:57:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:22.575 13:57:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.575 13:57:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.575 13:57:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.575 13:57:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:22.575 13:57:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.575 13:57:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:22.575 13:57:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:22.575 13:57:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:22.575 13:57:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.575 13:57:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:22.575 13:57:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:22.575 13:57:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:22.575 13:57:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.575 13:57:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:22.575 13:57:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.575 13:57:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:22.575 13:57:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:22.575 13:57:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.575 13:57:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:22.575 13:57:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.575 13:57:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.575 13:57:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.575 13:57:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:22.575 13:57:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.575 
13:57:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.575 13:57:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.575 13:57:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:22.575 13:57:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.575 13:57:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.575 13:57:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.575 13:57:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:22.836 13:57:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.836 13:57:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.836 13:57:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:22.836 13:57:24 -- common/autotest_common.sh@1572 -- # return 0 00:04:22.836 13:57:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:22.836 13:57:24 -- common/autotest_common.sh@1580 -- # return 0 00:04:22.836 13:57:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:22.836 13:57:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:22.836 13:57:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.836 13:57:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.836 13:57:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:22.836 13:57:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.836 13:57:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.836 13:57:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:22.836 13:57:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.836 13:57:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.836 13:57:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.836 13:57:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.836 ************************************ 00:04:22.836 START TEST env 00:04:22.836 ************************************ 00:04:22.836 13:57:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.837 * Looking for test storage... 
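
The `START TEST env` banner that follows comes from the `run_test` wrapper visible in the trace (`run_test env .../test/env/env.sh`). A sketch of the shape that wrapper has, inferred only from the banners in this log and not SPDK's actual implementation:

  # Banner-and-propagate-status wrapper, illustrative only.
  run_test_sketch() {
      local name=$1; shift
      printf '%s\nSTART TEST %s\n%s\n' '************' "$name" '************'
      "$@"
      local rc=$?
      printf '%s\nEND TEST %s\n%s\n' '************' "$name" '************'
      return "$rc"
  }
  run_test_sketch env /home/vagrant/spdk_repo/spdk/test/env/env.sh
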
00:04:22.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.837 13:57:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.837 13:57:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.837 13:57:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.837 13:57:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.837 13:57:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.837 13:57:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.837 13:57:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.837 13:57:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.837 13:57:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.837 13:57:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.837 13:57:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.837 13:57:24 env -- scripts/common.sh@344 -- # case "$op" in 00:04:22.837 13:57:24 env -- scripts/common.sh@345 -- # : 1 00:04:22.837 13:57:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.837 13:57:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.837 13:57:24 env -- scripts/common.sh@365 -- # decimal 1 00:04:22.837 13:57:24 env -- scripts/common.sh@353 -- # local d=1 00:04:22.837 13:57:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.837 13:57:24 env -- scripts/common.sh@355 -- # echo 1 00:04:22.837 13:57:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.837 13:57:24 env -- scripts/common.sh@366 -- # decimal 2 00:04:22.837 13:57:24 env -- scripts/common.sh@353 -- # local d=2 00:04:22.837 13:57:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.837 13:57:24 env -- scripts/common.sh@355 -- # echo 2 00:04:22.837 13:57:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.837 13:57:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.837 13:57:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.837 13:57:24 env -- scripts/common.sh@368 -- # return 0 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.837 --rc genhtml_branch_coverage=1 00:04:22.837 --rc genhtml_function_coverage=1 00:04:22.837 --rc genhtml_legend=1 00:04:22.837 --rc geninfo_all_blocks=1 00:04:22.837 --rc geninfo_unexecuted_blocks=1 00:04:22.837 00:04:22.837 ' 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.837 --rc genhtml_branch_coverage=1 00:04:22.837 --rc genhtml_function_coverage=1 00:04:22.837 --rc genhtml_legend=1 00:04:22.837 --rc geninfo_all_blocks=1 00:04:22.837 --rc geninfo_unexecuted_blocks=1 00:04:22.837 00:04:22.837 ' 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:22.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.837 --rc genhtml_branch_coverage=1 00:04:22.837 --rc genhtml_function_coverage=1 00:04:22.837 --rc 
genhtml_legend=1 00:04:22.837 --rc geninfo_all_blocks=1 00:04:22.837 --rc geninfo_unexecuted_blocks=1 00:04:22.837 00:04:22.837 ' 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.837 --rc genhtml_branch_coverage=1 00:04:22.837 --rc genhtml_function_coverage=1 00:04:22.837 --rc genhtml_legend=1 00:04:22.837 --rc geninfo_all_blocks=1 00:04:22.837 --rc geninfo_unexecuted_blocks=1 00:04:22.837 00:04:22.837 ' 00:04:22.837 13:57:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.837 13:57:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.837 13:57:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.837 ************************************ 00:04:22.837 START TEST env_memory 00:04:22.837 ************************************ 00:04:22.837 13:57:24 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.837 00:04:22.837 00:04:22.837 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.837 http://cunit.sourceforge.net/ 00:04:22.837 00:04:22.837 00:04:22.837 Suite: memory 00:04:22.837 Test: alloc and free memory map ...[2024-12-09 13:57:24.614492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.099 passed 00:04:23.099 Test: mem map translation ...[2024-12-09 13:57:24.653325] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.099 [2024-12-09 13:57:24.653374] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.099 [2024-12-09 13:57:24.653434] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.099 [2024-12-09 13:57:24.653449] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.099 passed 00:04:23.099 Test: mem map registration ...[2024-12-09 13:57:24.721589] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:23.099 [2024-12-09 13:57:24.721633] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:23.099 passed 00:04:23.099 Test: mem map adjacent registrations ...passed 00:04:23.099 00:04:23.099 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.099 suites 1 1 n/a 0 0 00:04:23.099 tests 4 4 4 0 0 00:04:23.099 asserts 152 152 152 0 n/a 00:04:23.099 00:04:23.099 Elapsed time = 0.233 seconds 00:04:23.099 00:04:23.099 real 0m0.270s 00:04:23.099 user 0m0.245s 00:04:23.099 sys 0m0.016s 00:04:23.099 13:57:24 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.099 13:57:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.099 ************************************ 00:04:23.099 END TEST env_memory 00:04:23.099 ************************************ 00:04:23.099 13:57:24 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.099 13:57:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.099 13:57:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.099 13:57:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.359 ************************************ 00:04:23.359 START TEST env_vtophys 00:04:23.359 ************************************ 00:04:23.359 13:57:24 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.359 EAL: lib.eal log level changed from notice to debug 00:04:23.359 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.359 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.359 EAL: Maximum logical cores by configuration: 128 00:04:23.359 EAL: Detected CPU lcores: 10 00:04:23.359 EAL: Detected NUMA nodes: 1 00:04:23.359 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.359 EAL: Detected shared linkage of DPDK 00:04:23.359 EAL: No shared files mode enabled, IPC will be disabled 00:04:23.359 EAL: Selected IOVA mode 'PA' 00:04:23.359 EAL: Probing VFIO support... 00:04:23.359 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.359 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.359 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.359 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.359 EAL: Setting up physically contiguous memory... 
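The memseg bring-up traced above (IOVA mode 'PA', hugepage memseg lists, physically contiguous memory) is exactly what the vtophys test depends on: once the env is initialized, buffers from the hugepage heap can be translated to physical addresses. Below is a minimal sketch of that flow using the public SPDK env API; the app name "vtophys_sketch" is made up and error handling is trimmed:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";   /* hypothetical app name */
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* Pinned, physically contiguous buffer from the hugepage heap. */
        void *buf = spdk_dma_zmalloc(2 * 1024 * 1024, 0x200000, NULL);

        /* Returns SPDK_VTOPHYS_ERROR for memory the env does not track. */
        uint64_t paddr = spdk_vtophys(buf, NULL);
        printf("vaddr=%p paddr=0x%" PRIx64 "\n", buf, paddr);

        spdk_dma_free(buf);
        return 0;
    }
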
00:04:23.359 EAL: Setting maximum number of open files to 524288 00:04:23.359 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.359 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.359 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.359 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.359 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.359 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.359 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.359 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.359 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.359 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.359 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.359 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.359 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.359 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.359 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.359 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.359 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.359 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.359 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.359 EAL: Hugepages will be freed exactly as allocated. 00:04:23.359 EAL: No shared files mode enabled, IPC is disabled 00:04:23.359 EAL: No shared files mode enabled, IPC is disabled 00:04:23.359 EAL: TSC frequency is ~2600000 KHz 00:04:23.359 EAL: Main lcore 0 is ready (tid=7f70e725fa40;cpuset=[0]) 00:04:23.359 EAL: Trying to obtain current memory policy. 00:04:23.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.359 EAL: Restoring previous memory policy: 0 00:04:23.359 EAL: request: mp_malloc_sync 00:04:23.359 EAL: No shared files mode enabled, IPC is disabled 00:04:23.359 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.359 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.359 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.359 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.359 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:23.359 00:04:23.359 00:04:23.359 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.359 http://cunit.sourceforge.net/ 00:04:23.359 00:04:23.359 00:04:23.359 Suite: components_suite 00:04:23.929 Test: vtophys_malloc_test ...passed 00:04:23.929 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
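The 'spdk:(nil)' callback registered above is the hook behind every "Calling mem event callback" line that follows: DPDK notifies listeners each time hugepage memory is mapped in or released, and SPDK uses those events to keep its translation maps current. A hedged sketch of registering such a listener through the DPDK API; the listener name "sketch" is invented:

    #include <stdio.h>
    #include <rte_memory.h>

    /* Invoked by EAL on every hugepage map/unmap. */
    static void
    mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
        (void)arg;
        printf("%s addr=%p len=%zu\n",
               event == RTE_MEM_EVENT_ALLOC ? "ALLOC" : "FREE", addr, len);
    }

    static int
    register_mem_listener(void)
    {
        /* EAL appears to log this as "Mem event callback '<name>:(arg)' registered". */
        return rte_mem_event_callback_register("sketch", mem_event_cb, NULL);
    }
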
00:04:23.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.929 EAL: Restoring previous memory policy: 4 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.929 EAL: Trying to obtain current memory policy. 00:04:23.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.929 EAL: Restoring previous memory policy: 4 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.929 EAL: Trying to obtain current memory policy. 00:04:23.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.929 EAL: Restoring previous memory policy: 4 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.929 EAL: Trying to obtain current memory policy. 00:04:23.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.929 EAL: Restoring previous memory policy: 4 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.929 EAL: Trying to obtain current memory policy. 00:04:23.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.929 EAL: Restoring previous memory policy: 4 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.929 EAL: Trying to obtain current memory policy. 
00:04:23.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.929 EAL: Restoring previous memory policy: 4 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.929 EAL: request: mp_malloc_sync 00:04:23.929 EAL: No shared files mode enabled, IPC is disabled 00:04:23.929 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.188 EAL: request: mp_malloc_sync 00:04:24.188 EAL: No shared files mode enabled, IPC is disabled 00:04:24.188 EAL: Heap on socket 0 was shrunk by 66MB 00:04:24.188 EAL: Trying to obtain current memory policy. 00:04:24.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.188 EAL: Restoring previous memory policy: 4 00:04:24.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.188 EAL: request: mp_malloc_sync 00:04:24.188 EAL: No shared files mode enabled, IPC is disabled 00:04:24.188 EAL: Heap on socket 0 was expanded by 130MB 00:04:24.188 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.448 EAL: request: mp_malloc_sync 00:04:24.448 EAL: No shared files mode enabled, IPC is disabled 00:04:24.448 EAL: Heap on socket 0 was shrunk by 130MB 00:04:24.448 EAL: Trying to obtain current memory policy. 00:04:24.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.448 EAL: Restoring previous memory policy: 4 00:04:24.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.448 EAL: request: mp_malloc_sync 00:04:24.448 EAL: No shared files mode enabled, IPC is disabled 00:04:24.448 EAL: Heap on socket 0 was expanded by 258MB 00:04:24.708 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.969 EAL: request: mp_malloc_sync 00:04:24.969 EAL: No shared files mode enabled, IPC is disabled 00:04:24.969 EAL: Heap on socket 0 was shrunk by 258MB 00:04:25.230 EAL: Trying to obtain current memory policy. 00:04:25.230 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.230 EAL: Restoring previous memory policy: 4 00:04:25.230 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.230 EAL: request: mp_malloc_sync 00:04:25.230 EAL: No shared files mode enabled, IPC is disabled 00:04:25.230 EAL: Heap on socket 0 was expanded by 514MB 00:04:25.800 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.058 EAL: request: mp_malloc_sync 00:04:26.058 EAL: No shared files mode enabled, IPC is disabled 00:04:26.058 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.316 EAL: Trying to obtain current memory policy. 
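The expand/shrink pairs above follow a deliberate size ladder: 4, 6, 10, 18, 34, 66 MB and on up, where each step is 2n - 2, so every allocation forces the dynamic heap past its previous high-water mark and every free shrinks it back. A sketch of driving that cadence (not the test's actual code; it assumes an already initialized SPDK env):

    /* Walk the same size ladder the log shows: next = 2*n - 2 MB. */
    for (size_t mb = 4; mb <= 1026; mb = 2 * mb - 2) {
        void *buf = spdk_dma_zmalloc(mb << 20, 0x200000, NULL);
        if (buf == NULL)
            break;              /* the heap could not expand any further */
        spdk_dma_free(buf);     /* logs "Heap on socket 0 was shrunk by ..." */
    }
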
00:04:26.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.573 EAL: Restoring previous memory policy: 4 00:04:26.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.573 EAL: request: mp_malloc_sync 00:04:26.573 EAL: No shared files mode enabled, IPC is disabled 00:04:26.573 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.505 EAL: request: mp_malloc_sync 00:04:27.505 EAL: No shared files mode enabled, IPC is disabled 00:04:27.505 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:28.450 passed 00:04:28.450 00:04:28.450 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.450 suites 1 1 n/a 0 0 00:04:28.450 tests 2 2 2 0 0 00:04:28.450 asserts 5796 5796 5796 0 n/a 00:04:28.450 00:04:28.450 Elapsed time = 4.822 seconds 00:04:28.450 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.450 EAL: request: mp_malloc_sync 00:04:28.450 EAL: No shared files mode enabled, IPC is disabled 00:04:28.450 EAL: Heap on socket 0 was shrunk by 2MB 00:04:28.450 EAL: No shared files mode enabled, IPC is disabled 00:04:28.450 EAL: No shared files mode enabled, IPC is disabled 00:04:28.450 EAL: No shared files mode enabled, IPC is disabled 00:04:28.450 ************************************ 00:04:28.450 END TEST env_vtophys 00:04:28.450 ************************************ 00:04:28.450 00:04:28.450 real 0m5.106s 00:04:28.450 user 0m4.164s 00:04:28.450 sys 0m0.794s 00:04:28.450 13:57:29 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.450 13:57:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:28.450 13:57:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:28.450 13:57:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.450 13:57:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.450 13:57:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.450 ************************************ 00:04:28.450 START TEST env_pci 00:04:28.450 ************************************ 00:04:28.450 13:57:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:28.450 00:04:28.450 00:04:28.450 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.450 http://cunit.sourceforge.net/ 00:04:28.450 00:04:28.450 00:04:28.450 Suite: pci 00:04:28.450 Test: pci_hook ...[2024-12-09 13:57:30.089348] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56991 has claimed it 00:04:28.450 passed 00:04:28.450 00:04:28.450 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.450 suites 1 1 n/a 0 0 00:04:28.450 tests 1 1 1 0 0 00:04:28.450 asserts 25 25 25 0 n/a 00:04:28.450 00:04:28.450 Elapsed time = 0.007 seconds 00:04:28.450 EAL: Cannot find device (10000:00:01.0) 00:04:28.450 EAL: Failed to attach device on primary process 00:04:28.450 00:04:28.450 real 0m0.071s 00:04:28.450 user 0m0.037s 00:04:28.450 sys 0m0.033s 00:04:28.450 13:57:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.450 13:57:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:28.450 ************************************ 00:04:28.450 END TEST env_pci 00:04:28.450 ************************************ 00:04:28.450 13:57:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:28.450 13:57:30 env -- env/env.sh@15 -- # uname 00:04:28.450 13:57:30 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:28.450 13:57:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:28.450 13:57:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.450 13:57:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:28.450 13:57:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.450 13:57:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.450 ************************************ 00:04:28.450 START TEST env_dpdk_post_init 00:04:28.450 ************************************ 00:04:28.450 13:57:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.450 EAL: Detected CPU lcores: 10 00:04:28.450 EAL: Detected NUMA nodes: 1 00:04:28.450 EAL: Detected shared linkage of DPDK 00:04:28.711 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.711 EAL: Selected IOVA mode 'PA' 00:04:28.711 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:28.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:28.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:28.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:28.711 Starting DPDK initialization... 00:04:28.711 Starting SPDK post initialization... 00:04:28.711 SPDK NVMe probe 00:04:28.711 Attaching to 0000:00:10.0 00:04:28.711 Attaching to 0000:00:11.0 00:04:28.711 Attaching to 0000:00:12.0 00:04:28.711 Attaching to 0000:00:13.0 00:04:28.711 Attached to 0000:00:10.0 00:04:28.711 Attached to 0000:00:11.0 00:04:28.711 Attached to 0000:00:13.0 00:04:28.711 Attached to 0000:00:12.0 00:04:28.711 Cleaning up... 
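The Attaching/Attached lines above are the standard spdk_nvme_probe() enumeration: a probe callback is offered each discovered controller and decides whether to attach, and the attach callback fires once controller initialization completes. A sketch against the public NVMe driver API, assuming local PCIe enumeration (NULL transport ID):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;    /* claim every controller that is offered */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    static int
    enumerate_local_nvme(void)
    {
        /* NULL trid: probe the local PCIe bus, as env_dpdk_post_init does. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }
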
00:04:28.711 00:04:28.711 real 0m0.233s 00:04:28.711 user 0m0.072s 00:04:28.711 sys 0m0.064s 00:04:28.711 ************************************ 00:04:28.711 END TEST env_dpdk_post_init 00:04:28.711 ************************************ 00:04:28.711 13:57:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.711 13:57:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.711 13:57:30 env -- env/env.sh@26 -- # uname 00:04:28.711 13:57:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:28.711 13:57:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.711 13:57:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.711 13:57:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.711 13:57:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.711 ************************************ 00:04:28.711 START TEST env_mem_callbacks 00:04:28.711 ************************************ 00:04:28.711 13:57:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.974 EAL: Detected CPU lcores: 10 00:04:28.974 EAL: Detected NUMA nodes: 1 00:04:28.974 EAL: Detected shared linkage of DPDK 00:04:28.974 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.974 EAL: Selected IOVA mode 'PA' 00:04:28.974 00:04:28.974 00:04:28.974 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.974 http://cunit.sourceforge.net/ 00:04:28.974 00:04:28.974 00:04:28.974 Suite: memory 00:04:28.974 Test: test ... 00:04:28.974 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.974 register 0x200000200000 2097152 00:04:28.974 malloc 3145728 00:04:28.974 register 0x200000400000 4194304 00:04:28.974 buf 0x2000004fffc0 len 3145728 PASSED 00:04:28.974 malloc 64 00:04:28.974 buf 0x2000004ffec0 len 64 PASSED 00:04:28.974 malloc 4194304 00:04:28.974 register 0x200000800000 6291456 00:04:28.974 buf 0x2000009fffc0 len 4194304 PASSED 00:04:28.974 free 0x2000004fffc0 3145728 00:04:28.974 free 0x2000004ffec0 64 00:04:28.974 unregister 0x200000400000 4194304 PASSED 00:04:28.974 free 0x2000009fffc0 4194304 00:04:28.974 unregister 0x200000800000 6291456 PASSED 00:04:28.974 malloc 8388608 00:04:28.974 register 0x200000400000 10485760 00:04:28.974 buf 0x2000005fffc0 len 8388608 PASSED 00:04:28.974 free 0x2000005fffc0 8388608 00:04:28.974 unregister 0x200000400000 10485760 PASSED 00:04:28.974 passed 00:04:28.974 00:04:28.974 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.974 suites 1 1 n/a 0 0 00:04:28.974 tests 1 1 1 0 0 00:04:28.974 asserts 15 15 15 0 n/a 00:04:28.974 00:04:28.974 Elapsed time = 0.040 seconds 00:04:28.974 00:04:28.974 real 0m0.210s 00:04:28.974 user 0m0.056s 00:04:28.974 sys 0m0.052s 00:04:28.974 13:57:30 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.974 ************************************ 00:04:28.974 13:57:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:28.974 END TEST env_mem_callbacks 00:04:28.974 ************************************ 00:04:28.974 00:04:28.974 real 0m6.360s 00:04:28.974 user 0m4.740s 00:04:28.974 sys 0m1.163s 00:04:28.974 13:57:30 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.974 ************************************ 00:04:28.974 END TEST env 00:04:28.974 ************************************ 00:04:28.974 13:57:30 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.235 13:57:30 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:29.235 13:57:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.236 13:57:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.236 13:57:30 -- common/autotest_common.sh@10 -- # set +x 00:04:29.236 ************************************ 00:04:29.236 START TEST rpc 00:04:29.236 ************************************ 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:29.236 * Looking for test storage... 00:04:29.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.236 13:57:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.236 13:57:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.236 13:57:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.236 13:57:30 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.236 13:57:30 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.236 13:57:30 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.236 13:57:30 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.236 13:57:30 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.236 13:57:30 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.236 13:57:30 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.236 13:57:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.236 13:57:30 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.236 13:57:30 rpc -- scripts/common.sh@345 -- # : 1 00:04:29.236 13:57:30 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.236 13:57:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.236 13:57:30 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.236 13:57:30 rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.236 13:57:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.236 13:57:30 rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.236 13:57:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.236 13:57:30 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.236 13:57:30 rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.236 13:57:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.236 13:57:30 rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.236 13:57:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.236 13:57:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.236 13:57:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.236 13:57:30 rpc -- scripts/common.sh@368 -- # return 0 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.236 --rc genhtml_branch_coverage=1 00:04:29.236 --rc genhtml_function_coverage=1 00:04:29.236 --rc genhtml_legend=1 00:04:29.236 --rc geninfo_all_blocks=1 00:04:29.236 --rc geninfo_unexecuted_blocks=1 00:04:29.236 00:04:29.236 ' 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.236 --rc genhtml_branch_coverage=1 00:04:29.236 --rc genhtml_function_coverage=1 00:04:29.236 --rc genhtml_legend=1 00:04:29.236 --rc geninfo_all_blocks=1 00:04:29.236 --rc geninfo_unexecuted_blocks=1 00:04:29.236 00:04:29.236 ' 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.236 --rc genhtml_branch_coverage=1 00:04:29.236 --rc genhtml_function_coverage=1 00:04:29.236 --rc genhtml_legend=1 00:04:29.236 --rc geninfo_all_blocks=1 00:04:29.236 --rc geninfo_unexecuted_blocks=1 00:04:29.236 00:04:29.236 ' 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.236 --rc genhtml_branch_coverage=1 00:04:29.236 --rc genhtml_function_coverage=1 00:04:29.236 --rc genhtml_legend=1 00:04:29.236 --rc geninfo_all_blocks=1 00:04:29.236 --rc geninfo_unexecuted_blocks=1 00:04:29.236 00:04:29.236 ' 00:04:29.236 13:57:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57118 00:04:29.236 13:57:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.236 13:57:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57118 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@835 -- # '[' -z 57118 ']' 00:04:29.236 13:57:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
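Once spdk_tgt is listening on /var/tmp/spdk.sock, every rpc_cmd in the tests below is just a JSON-RPC request over that Unix socket; bdev_malloc_create 8 512, for instance, becomes a bdev_malloc_create call with block_size/num_blocks params (8 MB at 512-byte blocks is the num_blocks 16384 seen in the dumps below). A self-contained sketch of issuing one such request directly with plain POSIX sockets:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            return 1;

        /* 8 MB malloc bdev with 512-byte blocks -> num_blocks 16384. */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
            "\"params\":{\"num_blocks\":16384,\"block_size\":512}}";
        write(fd, req, strlen(req));

        char resp[4096];
        ssize_t n = read(fd, resp, sizeof(resp) - 1);
        if (n > 0) { resp[n] = '\0'; printf("%s\n", resp); }
        close(fd);
        return 0;
    }
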
00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.236 13:57:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.497 [2024-12-09 13:57:31.035128] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:29.497 [2024-12-09 13:57:31.035481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57118 ] 00:04:29.497 [2024-12-09 13:57:31.199532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.759 [2024-12-09 13:57:31.313421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:29.759 [2024-12-09 13:57:31.313491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57118' to capture a snapshot of events at runtime. 00:04:29.759 [2024-12-09 13:57:31.313502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:29.759 [2024-12-09 13:57:31.313514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:29.759 [2024-12-09 13:57:31.313522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57118 for offline analysis/debug. 00:04:29.759 [2024-12-09 13:57:31.314449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.332 13:57:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.332 13:57:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.332 13:57:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.332 13:57:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.332 13:57:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:30.332 13:57:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:30.332 13:57:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.332 13:57:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.332 13:57:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.332 ************************************ 00:04:30.332 START TEST rpc_integrity 00:04:30.332 ************************************ 00:04:30.332 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:30.332 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.332 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.332 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.332 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.332 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.332 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.332 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.332 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.332 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.332 13:57:32 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.332 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.332 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:30.332 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.332 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.332 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.592 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.592 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.592 { 00:04:30.592 "name": "Malloc0", 00:04:30.592 "aliases": [ 00:04:30.592 "8937c81b-11f9-4f45-b160-d6ff44b42a2e" 00:04:30.592 ], 00:04:30.592 "product_name": "Malloc disk", 00:04:30.592 "block_size": 512, 00:04:30.592 "num_blocks": 16384, 00:04:30.592 "uuid": "8937c81b-11f9-4f45-b160-d6ff44b42a2e", 00:04:30.592 "assigned_rate_limits": { 00:04:30.592 "rw_ios_per_sec": 0, 00:04:30.592 "rw_mbytes_per_sec": 0, 00:04:30.592 "r_mbytes_per_sec": 0, 00:04:30.592 "w_mbytes_per_sec": 0 00:04:30.592 }, 00:04:30.592 "claimed": false, 00:04:30.592 "zoned": false, 00:04:30.592 "supported_io_types": { 00:04:30.592 "read": true, 00:04:30.592 "write": true, 00:04:30.592 "unmap": true, 00:04:30.592 "flush": true, 00:04:30.592 "reset": true, 00:04:30.592 "nvme_admin": false, 00:04:30.592 "nvme_io": false, 00:04:30.592 "nvme_io_md": false, 00:04:30.592 "write_zeroes": true, 00:04:30.592 "zcopy": true, 00:04:30.592 "get_zone_info": false, 00:04:30.592 "zone_management": false, 00:04:30.592 "zone_append": false, 00:04:30.592 "compare": false, 00:04:30.592 "compare_and_write": false, 00:04:30.592 "abort": true, 00:04:30.592 "seek_hole": false, 00:04:30.592 "seek_data": false, 00:04:30.592 "copy": true, 00:04:30.592 "nvme_iov_md": false 00:04:30.592 }, 00:04:30.592 "memory_domains": [ 00:04:30.592 { 00:04:30.592 "dma_device_id": "system", 00:04:30.592 "dma_device_type": 1 00:04:30.592 }, 00:04:30.592 { 00:04:30.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.592 "dma_device_type": 2 00:04:30.592 } 00:04:30.592 ], 00:04:30.592 "driver_specific": {} 00:04:30.592 } 00:04:30.592 ]' 00:04:30.592 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.592 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.592 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:30.592 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.592 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.592 [2024-12-09 13:57:32.167790] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:30.592 [2024-12-09 13:57:32.167870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.592 [2024-12-09 13:57:32.167900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:30.592 [2024-12-09 13:57:32.167914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.592 [2024-12-09 13:57:32.170499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.592 [2024-12-09 13:57:32.170581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.592 Passthru0 00:04:30.592 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.592 
13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.592 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.592 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.592 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.592 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.592 { 00:04:30.592 "name": "Malloc0", 00:04:30.592 "aliases": [ 00:04:30.592 "8937c81b-11f9-4f45-b160-d6ff44b42a2e" 00:04:30.592 ], 00:04:30.592 "product_name": "Malloc disk", 00:04:30.592 "block_size": 512, 00:04:30.592 "num_blocks": 16384, 00:04:30.592 "uuid": "8937c81b-11f9-4f45-b160-d6ff44b42a2e", 00:04:30.592 "assigned_rate_limits": { 00:04:30.592 "rw_ios_per_sec": 0, 00:04:30.592 "rw_mbytes_per_sec": 0, 00:04:30.592 "r_mbytes_per_sec": 0, 00:04:30.592 "w_mbytes_per_sec": 0 00:04:30.592 }, 00:04:30.592 "claimed": true, 00:04:30.592 "claim_type": "exclusive_write", 00:04:30.592 "zoned": false, 00:04:30.592 "supported_io_types": { 00:04:30.592 "read": true, 00:04:30.592 "write": true, 00:04:30.592 "unmap": true, 00:04:30.592 "flush": true, 00:04:30.592 "reset": true, 00:04:30.592 "nvme_admin": false, 00:04:30.592 "nvme_io": false, 00:04:30.592 "nvme_io_md": false, 00:04:30.592 "write_zeroes": true, 00:04:30.592 "zcopy": true, 00:04:30.592 "get_zone_info": false, 00:04:30.592 "zone_management": false, 00:04:30.592 "zone_append": false, 00:04:30.592 "compare": false, 00:04:30.592 "compare_and_write": false, 00:04:30.592 "abort": true, 00:04:30.592 "seek_hole": false, 00:04:30.592 "seek_data": false, 00:04:30.592 "copy": true, 00:04:30.592 "nvme_iov_md": false 00:04:30.592 }, 00:04:30.592 "memory_domains": [ 00:04:30.592 { 00:04:30.592 "dma_device_id": "system", 00:04:30.592 "dma_device_type": 1 00:04:30.592 }, 00:04:30.592 { 00:04:30.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.592 "dma_device_type": 2 00:04:30.592 } 00:04:30.592 ], 00:04:30.592 "driver_specific": {} 00:04:30.592 }, 00:04:30.592 { 00:04:30.592 "name": "Passthru0", 00:04:30.592 "aliases": [ 00:04:30.592 "fce7e43b-91ff-58b8-bf05-738dbf3e9249" 00:04:30.592 ], 00:04:30.592 "product_name": "passthru", 00:04:30.592 "block_size": 512, 00:04:30.592 "num_blocks": 16384, 00:04:30.592 "uuid": "fce7e43b-91ff-58b8-bf05-738dbf3e9249", 00:04:30.592 "assigned_rate_limits": { 00:04:30.592 "rw_ios_per_sec": 0, 00:04:30.592 "rw_mbytes_per_sec": 0, 00:04:30.592 "r_mbytes_per_sec": 0, 00:04:30.592 "w_mbytes_per_sec": 0 00:04:30.592 }, 00:04:30.592 "claimed": false, 00:04:30.592 "zoned": false, 00:04:30.592 "supported_io_types": { 00:04:30.592 "read": true, 00:04:30.592 "write": true, 00:04:30.592 "unmap": true, 00:04:30.592 "flush": true, 00:04:30.592 "reset": true, 00:04:30.592 "nvme_admin": false, 00:04:30.592 "nvme_io": false, 00:04:30.592 "nvme_io_md": false, 00:04:30.592 "write_zeroes": true, 00:04:30.592 "zcopy": true, 00:04:30.592 "get_zone_info": false, 00:04:30.592 "zone_management": false, 00:04:30.592 "zone_append": false, 00:04:30.592 "compare": false, 00:04:30.592 "compare_and_write": false, 00:04:30.592 "abort": true, 00:04:30.592 "seek_hole": false, 00:04:30.592 "seek_data": false, 00:04:30.592 "copy": true, 00:04:30.592 "nvme_iov_md": false 00:04:30.592 }, 00:04:30.592 "memory_domains": [ 00:04:30.592 { 00:04:30.592 "dma_device_id": "system", 00:04:30.592 "dma_device_type": 1 00:04:30.592 }, 00:04:30.592 { 00:04:30.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.592 "dma_device_type": 2 
00:04:30.592 } 00:04:30.592 ], 00:04:30.592 "driver_specific": { 00:04:30.592 "passthru": { 00:04:30.593 "name": "Passthru0", 00:04:30.593 "base_bdev_name": "Malloc0" 00:04:30.593 } 00:04:30.593 } 00:04:30.593 } 00:04:30.593 ]' 00:04:30.593 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.593 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.593 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.593 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.593 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.593 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.593 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.593 ************************************ 00:04:30.593 END TEST rpc_integrity 00:04:30.593 ************************************ 00:04:30.593 13:57:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.593 00:04:30.593 real 0m0.262s 00:04:30.593 user 0m0.134s 00:04:30.593 sys 0m0.037s 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.593 13:57:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.593 13:57:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:30.593 13:57:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.593 13:57:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.593 13:57:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.593 ************************************ 00:04:30.593 START TEST rpc_plugins 00:04:30.593 ************************************ 00:04:30.593 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:30.593 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:30.593 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.593 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.593 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.593 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:30.593 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:30.593 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.593 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.855 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:30.855 { 00:04:30.855 "name": "Malloc1", 00:04:30.855 "aliases": 
[ 00:04:30.855 "230ca12c-a5fd-4d31-9274-b0f2de9fad98" 00:04:30.855 ], 00:04:30.855 "product_name": "Malloc disk", 00:04:30.855 "block_size": 4096, 00:04:30.855 "num_blocks": 256, 00:04:30.855 "uuid": "230ca12c-a5fd-4d31-9274-b0f2de9fad98", 00:04:30.855 "assigned_rate_limits": { 00:04:30.855 "rw_ios_per_sec": 0, 00:04:30.855 "rw_mbytes_per_sec": 0, 00:04:30.855 "r_mbytes_per_sec": 0, 00:04:30.855 "w_mbytes_per_sec": 0 00:04:30.855 }, 00:04:30.855 "claimed": false, 00:04:30.855 "zoned": false, 00:04:30.855 "supported_io_types": { 00:04:30.855 "read": true, 00:04:30.855 "write": true, 00:04:30.855 "unmap": true, 00:04:30.855 "flush": true, 00:04:30.855 "reset": true, 00:04:30.855 "nvme_admin": false, 00:04:30.855 "nvme_io": false, 00:04:30.855 "nvme_io_md": false, 00:04:30.855 "write_zeroes": true, 00:04:30.855 "zcopy": true, 00:04:30.855 "get_zone_info": false, 00:04:30.855 "zone_management": false, 00:04:30.855 "zone_append": false, 00:04:30.855 "compare": false, 00:04:30.855 "compare_and_write": false, 00:04:30.855 "abort": true, 00:04:30.855 "seek_hole": false, 00:04:30.855 "seek_data": false, 00:04:30.855 "copy": true, 00:04:30.855 "nvme_iov_md": false 00:04:30.855 }, 00:04:30.855 "memory_domains": [ 00:04:30.855 { 00:04:30.855 "dma_device_id": "system", 00:04:30.855 "dma_device_type": 1 00:04:30.855 }, 00:04:30.855 { 00:04:30.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.855 "dma_device_type": 2 00:04:30.855 } 00:04:30.855 ], 00:04:30.855 "driver_specific": {} 00:04:30.855 } 00:04:30.855 ]' 00:04:30.855 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:30.855 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:30.855 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.855 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.855 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:30.855 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:30.855 ************************************ 00:04:30.855 END TEST rpc_plugins 00:04:30.855 ************************************ 00:04:30.855 13:57:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:30.855 00:04:30.855 real 0m0.123s 00:04:30.855 user 0m0.063s 00:04:30.855 sys 0m0.018s 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.855 13:57:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.855 13:57:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:30.855 13:57:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.855 13:57:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.855 13:57:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.855 ************************************ 00:04:30.855 START TEST rpc_trace_cmd_test 00:04:30.855 ************************************ 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:30.855 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57118", 00:04:30.855 "tpoint_group_mask": "0x8", 00:04:30.855 "iscsi_conn": { 00:04:30.855 "mask": "0x2", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "scsi": { 00:04:30.855 "mask": "0x4", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "bdev": { 00:04:30.855 "mask": "0x8", 00:04:30.855 "tpoint_mask": "0xffffffffffffffff" 00:04:30.855 }, 00:04:30.855 "nvmf_rdma": { 00:04:30.855 "mask": "0x10", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "nvmf_tcp": { 00:04:30.855 "mask": "0x20", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "ftl": { 00:04:30.855 "mask": "0x40", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "blobfs": { 00:04:30.855 "mask": "0x80", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "dsa": { 00:04:30.855 "mask": "0x200", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "thread": { 00:04:30.855 "mask": "0x400", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "nvme_pcie": { 00:04:30.855 "mask": "0x800", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "iaa": { 00:04:30.855 "mask": "0x1000", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "nvme_tcp": { 00:04:30.855 "mask": "0x2000", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "bdev_nvme": { 00:04:30.855 "mask": "0x4000", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "sock": { 00:04:30.855 "mask": "0x8000", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "blob": { 00:04:30.855 "mask": "0x10000", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "bdev_raid": { 00:04:30.855 "mask": "0x20000", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 }, 00:04:30.855 "scheduler": { 00:04:30.855 "mask": "0x40000", 00:04:30.855 "tpoint_mask": "0x0" 00:04:30.855 } 00:04:30.855 }' 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:30.855 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:31.116 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:31.117 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:31.117 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:31.117 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:31.117 ************************************ 00:04:31.117 END TEST rpc_trace_cmd_test 00:04:31.117 ************************************ 00:04:31.117 13:57:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:31.117 00:04:31.117 real 0m0.161s 
00:04:31.117 user 0m0.129s 00:04:31.117 sys 0m0.023s 00:04:31.117 13:57:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.117 13:57:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.117 13:57:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:31.117 13:57:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:31.117 13:57:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:31.117 13:57:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.117 13:57:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.117 13:57:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.117 ************************************ 00:04:31.117 START TEST rpc_daemon_integrity 00:04:31.117 ************************************ 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.117 { 00:04:31.117 "name": "Malloc2", 00:04:31.117 "aliases": [ 00:04:31.117 "cf4bb4bf-44cc-47d1-a1b9-04a368b55430" 00:04:31.117 ], 00:04:31.117 "product_name": "Malloc disk", 00:04:31.117 "block_size": 512, 00:04:31.117 "num_blocks": 16384, 00:04:31.117 "uuid": "cf4bb4bf-44cc-47d1-a1b9-04a368b55430", 00:04:31.117 "assigned_rate_limits": { 00:04:31.117 "rw_ios_per_sec": 0, 00:04:31.117 "rw_mbytes_per_sec": 0, 00:04:31.117 "r_mbytes_per_sec": 0, 00:04:31.117 "w_mbytes_per_sec": 0 00:04:31.117 }, 00:04:31.117 "claimed": false, 00:04:31.117 "zoned": false, 00:04:31.117 "supported_io_types": { 00:04:31.117 "read": true, 00:04:31.117 "write": true, 00:04:31.117 "unmap": true, 00:04:31.117 "flush": true, 00:04:31.117 "reset": true, 00:04:31.117 "nvme_admin": false, 00:04:31.117 "nvme_io": false, 00:04:31.117 "nvme_io_md": false, 00:04:31.117 "write_zeroes": true, 00:04:31.117 "zcopy": true, 00:04:31.117 "get_zone_info": false, 00:04:31.117 "zone_management": false, 00:04:31.117 "zone_append": false, 00:04:31.117 "compare": false, 00:04:31.117 
"compare_and_write": false, 00:04:31.117 "abort": true, 00:04:31.117 "seek_hole": false, 00:04:31.117 "seek_data": false, 00:04:31.117 "copy": true, 00:04:31.117 "nvme_iov_md": false 00:04:31.117 }, 00:04:31.117 "memory_domains": [ 00:04:31.117 { 00:04:31.117 "dma_device_id": "system", 00:04:31.117 "dma_device_type": 1 00:04:31.117 }, 00:04:31.117 { 00:04:31.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.117 "dma_device_type": 2 00:04:31.117 } 00:04:31.117 ], 00:04:31.117 "driver_specific": {} 00:04:31.117 } 00:04:31.117 ]' 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.117 [2024-12-09 13:57:32.883278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:31.117 [2024-12-09 13:57:32.883503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.117 [2024-12-09 13:57:32.883549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:31.117 [2024-12-09 13:57:32.883565] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.117 [2024-12-09 13:57:32.886072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.117 [2024-12-09 13:57:32.886130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.117 Passthru0 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.117 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.379 { 00:04:31.379 "name": "Malloc2", 00:04:31.379 "aliases": [ 00:04:31.379 "cf4bb4bf-44cc-47d1-a1b9-04a368b55430" 00:04:31.379 ], 00:04:31.379 "product_name": "Malloc disk", 00:04:31.379 "block_size": 512, 00:04:31.379 "num_blocks": 16384, 00:04:31.379 "uuid": "cf4bb4bf-44cc-47d1-a1b9-04a368b55430", 00:04:31.379 "assigned_rate_limits": { 00:04:31.379 "rw_ios_per_sec": 0, 00:04:31.379 "rw_mbytes_per_sec": 0, 00:04:31.379 "r_mbytes_per_sec": 0, 00:04:31.379 "w_mbytes_per_sec": 0 00:04:31.379 }, 00:04:31.379 "claimed": true, 00:04:31.379 "claim_type": "exclusive_write", 00:04:31.379 "zoned": false, 00:04:31.379 "supported_io_types": { 00:04:31.379 "read": true, 00:04:31.379 "write": true, 00:04:31.379 "unmap": true, 00:04:31.379 "flush": true, 00:04:31.379 "reset": true, 00:04:31.379 "nvme_admin": false, 00:04:31.379 "nvme_io": false, 00:04:31.379 "nvme_io_md": false, 00:04:31.379 "write_zeroes": true, 00:04:31.379 "zcopy": true, 00:04:31.379 "get_zone_info": false, 00:04:31.379 "zone_management": false, 00:04:31.379 "zone_append": false, 00:04:31.379 "compare": false, 00:04:31.379 "compare_and_write": false, 00:04:31.379 "abort": true, 00:04:31.379 "seek_hole": false, 00:04:31.379 "seek_data": false, 
00:04:31.379 "copy": true, 00:04:31.379 "nvme_iov_md": false 00:04:31.379 }, 00:04:31.379 "memory_domains": [ 00:04:31.379 { 00:04:31.379 "dma_device_id": "system", 00:04:31.379 "dma_device_type": 1 00:04:31.379 }, 00:04:31.379 { 00:04:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.379 "dma_device_type": 2 00:04:31.379 } 00:04:31.379 ], 00:04:31.379 "driver_specific": {} 00:04:31.379 }, 00:04:31.379 { 00:04:31.379 "name": "Passthru0", 00:04:31.379 "aliases": [ 00:04:31.379 "2a7ed8c5-3e47-5c1a-af91-a824ad2af28e" 00:04:31.379 ], 00:04:31.379 "product_name": "passthru", 00:04:31.379 "block_size": 512, 00:04:31.379 "num_blocks": 16384, 00:04:31.379 "uuid": "2a7ed8c5-3e47-5c1a-af91-a824ad2af28e", 00:04:31.379 "assigned_rate_limits": { 00:04:31.379 "rw_ios_per_sec": 0, 00:04:31.379 "rw_mbytes_per_sec": 0, 00:04:31.379 "r_mbytes_per_sec": 0, 00:04:31.379 "w_mbytes_per_sec": 0 00:04:31.379 }, 00:04:31.379 "claimed": false, 00:04:31.379 "zoned": false, 00:04:31.379 "supported_io_types": { 00:04:31.379 "read": true, 00:04:31.379 "write": true, 00:04:31.379 "unmap": true, 00:04:31.379 "flush": true, 00:04:31.379 "reset": true, 00:04:31.379 "nvme_admin": false, 00:04:31.379 "nvme_io": false, 00:04:31.379 "nvme_io_md": false, 00:04:31.379 "write_zeroes": true, 00:04:31.379 "zcopy": true, 00:04:31.379 "get_zone_info": false, 00:04:31.379 "zone_management": false, 00:04:31.379 "zone_append": false, 00:04:31.379 "compare": false, 00:04:31.379 "compare_and_write": false, 00:04:31.379 "abort": true, 00:04:31.379 "seek_hole": false, 00:04:31.379 "seek_data": false, 00:04:31.379 "copy": true, 00:04:31.379 "nvme_iov_md": false 00:04:31.379 }, 00:04:31.379 "memory_domains": [ 00:04:31.379 { 00:04:31.379 "dma_device_id": "system", 00:04:31.379 "dma_device_type": 1 00:04:31.379 }, 00:04:31.379 { 00:04:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.379 "dma_device_type": 2 00:04:31.379 } 00:04:31.379 ], 00:04:31.379 "driver_specific": { 00:04:31.379 "passthru": { 00:04:31.379 "name": "Passthru0", 00:04:31.379 "base_bdev_name": "Malloc2" 00:04:31.379 } 00:04:31.379 } 00:04:31.379 } 00:04:31.379 ]' 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:31.379 13:57:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.379 ************************************ 00:04:31.379 END TEST rpc_daemon_integrity 00:04:31.379 ************************************ 00:04:31.379 13:57:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.379 00:04:31.379 real 0m0.252s 00:04:31.379 user 0m0.136s 00:04:31.379 sys 0m0.028s 00:04:31.379 13:57:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.379 13:57:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.379 13:57:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:31.379 13:57:33 rpc -- rpc/rpc.sh@84 -- # killprocess 57118 00:04:31.379 13:57:33 rpc -- common/autotest_common.sh@954 -- # '[' -z 57118 ']' 00:04:31.379 13:57:33 rpc -- common/autotest_common.sh@958 -- # kill -0 57118 00:04:31.379 13:57:33 rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.379 13:57:33 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.379 13:57:33 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57118 00:04:31.379 killing process with pid 57118 00:04:31.379 13:57:33 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.379 13:57:33 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.379 13:57:33 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57118' 00:04:31.380 13:57:33 rpc -- common/autotest_common.sh@973 -- # kill 57118 00:04:31.380 13:57:33 rpc -- common/autotest_common.sh@978 -- # wait 57118 00:04:33.300 00:04:33.300 real 0m3.802s 00:04:33.300 user 0m4.134s 00:04:33.300 sys 0m0.733s 00:04:33.300 13:57:34 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.300 13:57:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.300 ************************************ 00:04:33.300 END TEST rpc 00:04:33.300 ************************************ 00:04:33.300 13:57:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:33.300 13:57:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.300 13:57:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.300 13:57:34 -- common/autotest_common.sh@10 -- # set +x 00:04:33.300 ************************************ 00:04:33.300 START TEST skip_rpc 00:04:33.300 ************************************ 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:33.300 * Looking for test storage... 
00:04:33.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.300 13:57:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:33.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.300 --rc genhtml_branch_coverage=1 00:04:33.300 --rc genhtml_function_coverage=1 00:04:33.300 --rc genhtml_legend=1 00:04:33.300 --rc geninfo_all_blocks=1 00:04:33.300 --rc geninfo_unexecuted_blocks=1 00:04:33.300 00:04:33.300 ' 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:33.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.300 --rc genhtml_branch_coverage=1 00:04:33.300 --rc genhtml_function_coverage=1 00:04:33.300 --rc genhtml_legend=1 00:04:33.300 --rc geninfo_all_blocks=1 00:04:33.300 --rc geninfo_unexecuted_blocks=1 00:04:33.300 00:04:33.300 ' 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:33.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.300 --rc genhtml_branch_coverage=1 00:04:33.300 --rc genhtml_function_coverage=1 00:04:33.300 --rc genhtml_legend=1 00:04:33.300 --rc geninfo_all_blocks=1 00:04:33.300 --rc geninfo_unexecuted_blocks=1 00:04:33.300 00:04:33.300 ' 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:33.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.300 --rc genhtml_branch_coverage=1 00:04:33.300 --rc genhtml_function_coverage=1 00:04:33.300 --rc genhtml_legend=1 00:04:33.300 --rc geninfo_all_blocks=1 00:04:33.300 --rc geninfo_unexecuted_blocks=1 00:04:33.300 00:04:33.300 ' 00:04:33.300 13:57:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.300 13:57:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:33.300 13:57:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.300 13:57:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.300 ************************************ 00:04:33.300 START TEST skip_rpc 00:04:33.300 ************************************ 00:04:33.300 13:57:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:33.300 13:57:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57335 00:04:33.300 13:57:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.300 13:57:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:33.300 13:57:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:33.300 [2024-12-09 13:57:34.923839] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
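Note: the trace that follows asserts the negative case — with --no-rpc-server the target comes up, but rpc_cmd spdk_get_version must fail, and the NOT helper inverts that exit status. An equivalent standalone sketch, assuming the spdk_tgt binary from this build and the default RPC socket path:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                    # the test's fixed settle time
    if ./scripts/rpc.py spdk_get_version; then
        echo 'FAIL: RPC server should not be listening' >&2
    fi
    kill "$pid" && wait "$pid"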
00:04:33.300 [2024-12-09 13:57:34.924090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57335 ] 00:04:33.301 [2024-12-09 13:57:35.080192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.563 [2024-12-09 13:57:35.207378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57335 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57335 ']' 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57335 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57335 00:04:38.860 killing process with pid 57335 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57335' 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57335 00:04:38.860 13:57:39 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57335 00:04:39.427 00:04:39.427 real 0m6.225s 00:04:39.427 user 0m5.744s 00:04:39.427 sys 0m0.373s 00:04:39.427 13:57:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.427 ************************************ 00:04:39.427 END TEST skip_rpc 00:04:39.427 ************************************ 00:04:39.427 13:57:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:39.427 13:57:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:39.427 13:57:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.427 13:57:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.427 13:57:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.427 ************************************ 00:04:39.427 START TEST skip_rpc_with_json 00:04:39.427 ************************************ 00:04:39.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57429 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57429 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57429 ']' 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.427 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.427 [2024-12-09 13:57:41.204484] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
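Note: the trace that follows first proves the tcp transport does not exist yet (nvmf_get_transports returns JSON-RPC error -19, "No such device"), then creates it and snapshots the live configuration to config.json. The same three RPCs issued by hand, assuming the default socket:

    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails until the transport exists
    ./scripts/rpc.py nvmf_create_transport -t tcp       # logs '*** TCP Transport Init ***'
    ./scripts/rpc.py save_config > config.json          # full per-subsystem config as JSON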
00:04:39.427 [2024-12-09 13:57:41.204610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57429 ] 00:04:39.686 [2024-12-09 13:57:41.356422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.686 [2024-12-09 13:57:41.435635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.251 [2024-12-09 13:57:41.989193] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:40.251 request: 00:04:40.251 { 00:04:40.251 "trtype": "tcp", 00:04:40.251 "method": "nvmf_get_transports", 00:04:40.251 "req_id": 1 00:04:40.251 } 00:04:40.251 Got JSON-RPC error response 00:04:40.251 response: 00:04:40.251 { 00:04:40.251 "code": -19, 00:04:40.251 "message": "No such device" 00:04:40.251 } 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.251 [2024-12-09 13:57:41.997286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.251 13:57:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:40.251 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.251 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.510 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.510 13:57:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.510 { 00:04:40.510 "subsystems": [ 00:04:40.510 { 00:04:40.510 "subsystem": "fsdev", 00:04:40.510 "config": [ 00:04:40.510 { 00:04:40.510 "method": "fsdev_set_opts", 00:04:40.510 "params": { 00:04:40.510 "fsdev_io_pool_size": 65535, 00:04:40.510 "fsdev_io_cache_size": 256 00:04:40.510 } 00:04:40.510 } 00:04:40.510 ] 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "subsystem": "keyring", 00:04:40.510 "config": [] 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "subsystem": "iobuf", 00:04:40.510 "config": [ 00:04:40.510 { 00:04:40.510 "method": "iobuf_set_options", 00:04:40.510 "params": { 00:04:40.510 "small_pool_count": 8192, 00:04:40.510 "large_pool_count": 1024, 00:04:40.510 "small_bufsize": 8192, 00:04:40.510 "large_bufsize": 135168, 00:04:40.510 "enable_numa": false 00:04:40.510 } 00:04:40.510 } 00:04:40.510 ] 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "subsystem": "sock", 00:04:40.510 "config": [ 00:04:40.510 { 
00:04:40.510 "method": "sock_set_default_impl", 00:04:40.510 "params": { 00:04:40.510 "impl_name": "posix" 00:04:40.510 } 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "method": "sock_impl_set_options", 00:04:40.510 "params": { 00:04:40.510 "impl_name": "ssl", 00:04:40.510 "recv_buf_size": 4096, 00:04:40.510 "send_buf_size": 4096, 00:04:40.510 "enable_recv_pipe": true, 00:04:40.510 "enable_quickack": false, 00:04:40.510 "enable_placement_id": 0, 00:04:40.510 "enable_zerocopy_send_server": true, 00:04:40.510 "enable_zerocopy_send_client": false, 00:04:40.510 "zerocopy_threshold": 0, 00:04:40.510 "tls_version": 0, 00:04:40.510 "enable_ktls": false 00:04:40.510 } 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "method": "sock_impl_set_options", 00:04:40.510 "params": { 00:04:40.510 "impl_name": "posix", 00:04:40.510 "recv_buf_size": 2097152, 00:04:40.510 "send_buf_size": 2097152, 00:04:40.510 "enable_recv_pipe": true, 00:04:40.510 "enable_quickack": false, 00:04:40.510 "enable_placement_id": 0, 00:04:40.510 "enable_zerocopy_send_server": true, 00:04:40.510 "enable_zerocopy_send_client": false, 00:04:40.510 "zerocopy_threshold": 0, 00:04:40.510 "tls_version": 0, 00:04:40.510 "enable_ktls": false 00:04:40.510 } 00:04:40.510 } 00:04:40.510 ] 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "subsystem": "vmd", 00:04:40.510 "config": [] 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "subsystem": "accel", 00:04:40.510 "config": [ 00:04:40.510 { 00:04:40.510 "method": "accel_set_options", 00:04:40.510 "params": { 00:04:40.510 "small_cache_size": 128, 00:04:40.510 "large_cache_size": 16, 00:04:40.510 "task_count": 2048, 00:04:40.510 "sequence_count": 2048, 00:04:40.510 "buf_count": 2048 00:04:40.510 } 00:04:40.510 } 00:04:40.510 ] 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "subsystem": "bdev", 00:04:40.510 "config": [ 00:04:40.510 { 00:04:40.510 "method": "bdev_set_options", 00:04:40.510 "params": { 00:04:40.510 "bdev_io_pool_size": 65535, 00:04:40.510 "bdev_io_cache_size": 256, 00:04:40.510 "bdev_auto_examine": true, 00:04:40.510 "iobuf_small_cache_size": 128, 00:04:40.510 "iobuf_large_cache_size": 16 00:04:40.510 } 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "method": "bdev_raid_set_options", 00:04:40.510 "params": { 00:04:40.510 "process_window_size_kb": 1024, 00:04:40.510 "process_max_bandwidth_mb_sec": 0 00:04:40.510 } 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "method": "bdev_iscsi_set_options", 00:04:40.510 "params": { 00:04:40.510 "timeout_sec": 30 00:04:40.510 } 00:04:40.510 }, 00:04:40.510 { 00:04:40.510 "method": "bdev_nvme_set_options", 00:04:40.510 "params": { 00:04:40.510 "action_on_timeout": "none", 00:04:40.510 "timeout_us": 0, 00:04:40.510 "timeout_admin_us": 0, 00:04:40.510 "keep_alive_timeout_ms": 10000, 00:04:40.510 "arbitration_burst": 0, 00:04:40.510 "low_priority_weight": 0, 00:04:40.510 "medium_priority_weight": 0, 00:04:40.510 "high_priority_weight": 0, 00:04:40.510 "nvme_adminq_poll_period_us": 10000, 00:04:40.510 "nvme_ioq_poll_period_us": 0, 00:04:40.510 "io_queue_requests": 0, 00:04:40.510 "delay_cmd_submit": true, 00:04:40.510 "transport_retry_count": 4, 00:04:40.510 "bdev_retry_count": 3, 00:04:40.511 "transport_ack_timeout": 0, 00:04:40.511 "ctrlr_loss_timeout_sec": 0, 00:04:40.511 "reconnect_delay_sec": 0, 00:04:40.511 "fast_io_fail_timeout_sec": 0, 00:04:40.511 "disable_auto_failback": false, 00:04:40.511 "generate_uuids": false, 00:04:40.511 "transport_tos": 0, 00:04:40.511 "nvme_error_stat": false, 00:04:40.511 "rdma_srq_size": 0, 00:04:40.511 "io_path_stat": false, 
00:04:40.511 "allow_accel_sequence": false, 00:04:40.511 "rdma_max_cq_size": 0, 00:04:40.511 "rdma_cm_event_timeout_ms": 0, 00:04:40.511 "dhchap_digests": [ 00:04:40.511 "sha256", 00:04:40.511 "sha384", 00:04:40.511 "sha512" 00:04:40.511 ], 00:04:40.511 "dhchap_dhgroups": [ 00:04:40.511 "null", 00:04:40.511 "ffdhe2048", 00:04:40.511 "ffdhe3072", 00:04:40.511 "ffdhe4096", 00:04:40.511 "ffdhe6144", 00:04:40.511 "ffdhe8192" 00:04:40.511 ] 00:04:40.511 } 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "method": "bdev_nvme_set_hotplug", 00:04:40.511 "params": { 00:04:40.511 "period_us": 100000, 00:04:40.511 "enable": false 00:04:40.511 } 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "method": "bdev_wait_for_examine" 00:04:40.511 } 00:04:40.511 ] 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "subsystem": "scsi", 00:04:40.511 "config": null 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "subsystem": "scheduler", 00:04:40.511 "config": [ 00:04:40.511 { 00:04:40.511 "method": "framework_set_scheduler", 00:04:40.511 "params": { 00:04:40.511 "name": "static" 00:04:40.511 } 00:04:40.511 } 00:04:40.511 ] 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "subsystem": "vhost_scsi", 00:04:40.511 "config": [] 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "subsystem": "vhost_blk", 00:04:40.511 "config": [] 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "subsystem": "ublk", 00:04:40.511 "config": [] 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "subsystem": "nbd", 00:04:40.511 "config": [] 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "subsystem": "nvmf", 00:04:40.511 "config": [ 00:04:40.511 { 00:04:40.511 "method": "nvmf_set_config", 00:04:40.511 "params": { 00:04:40.511 "discovery_filter": "match_any", 00:04:40.511 "admin_cmd_passthru": { 00:04:40.511 "identify_ctrlr": false 00:04:40.511 }, 00:04:40.511 "dhchap_digests": [ 00:04:40.511 "sha256", 00:04:40.511 "sha384", 00:04:40.511 "sha512" 00:04:40.511 ], 00:04:40.511 "dhchap_dhgroups": [ 00:04:40.511 "null", 00:04:40.511 "ffdhe2048", 00:04:40.511 "ffdhe3072", 00:04:40.511 "ffdhe4096", 00:04:40.511 "ffdhe6144", 00:04:40.511 "ffdhe8192" 00:04:40.511 ] 00:04:40.511 } 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "method": "nvmf_set_max_subsystems", 00:04:40.511 "params": { 00:04:40.511 "max_subsystems": 1024 00:04:40.511 } 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "method": "nvmf_set_crdt", 00:04:40.511 "params": { 00:04:40.511 "crdt1": 0, 00:04:40.511 "crdt2": 0, 00:04:40.511 "crdt3": 0 00:04:40.511 } 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "method": "nvmf_create_transport", 00:04:40.511 "params": { 00:04:40.511 "trtype": "TCP", 00:04:40.511 "max_queue_depth": 128, 00:04:40.511 "max_io_qpairs_per_ctrlr": 127, 00:04:40.511 "in_capsule_data_size": 4096, 00:04:40.511 "max_io_size": 131072, 00:04:40.511 "io_unit_size": 131072, 00:04:40.511 "max_aq_depth": 128, 00:04:40.511 "num_shared_buffers": 511, 00:04:40.511 "buf_cache_size": 4294967295, 00:04:40.511 "dif_insert_or_strip": false, 00:04:40.511 "zcopy": false, 00:04:40.511 "c2h_success": true, 00:04:40.511 "sock_priority": 0, 00:04:40.511 "abort_timeout_sec": 1, 00:04:40.511 "ack_timeout": 0, 00:04:40.511 "data_wr_pool_size": 0 00:04:40.511 } 00:04:40.511 } 00:04:40.511 ] 00:04:40.511 }, 00:04:40.511 { 00:04:40.511 "subsystem": "iscsi", 00:04:40.511 "config": [ 00:04:40.511 { 00:04:40.511 "method": "iscsi_set_options", 00:04:40.511 "params": { 00:04:40.511 "node_base": "iqn.2016-06.io.spdk", 00:04:40.511 "max_sessions": 128, 00:04:40.511 "max_connections_per_session": 2, 00:04:40.511 "max_queue_depth": 64, 00:04:40.511 
"default_time2wait": 2, 00:04:40.511 "default_time2retain": 20, 00:04:40.511 "first_burst_length": 8192, 00:04:40.511 "immediate_data": true, 00:04:40.511 "allow_duplicated_isid": false, 00:04:40.511 "error_recovery_level": 0, 00:04:40.511 "nop_timeout": 60, 00:04:40.511 "nop_in_interval": 30, 00:04:40.511 "disable_chap": false, 00:04:40.511 "require_chap": false, 00:04:40.511 "mutual_chap": false, 00:04:40.511 "chap_group": 0, 00:04:40.511 "max_large_datain_per_connection": 64, 00:04:40.511 "max_r2t_per_connection": 4, 00:04:40.511 "pdu_pool_size": 36864, 00:04:40.511 "immediate_data_pool_size": 16384, 00:04:40.511 "data_out_pool_size": 2048 00:04:40.511 } 00:04:40.511 } 00:04:40.511 ] 00:04:40.511 } 00:04:40.511 ] 00:04:40.511 } 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57429 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57429 ']' 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57429 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57429 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.511 killing process with pid 57429 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57429' 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57429 00:04:40.511 13:57:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57429 00:04:41.912 13:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57463 00:04:41.912 13:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:41.912 13:57:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57463 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57463 ']' 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57463 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57463 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.190 killing process with pid 57463 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57463' 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57463 00:04:47.190 13:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57463 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.129 ************************************ 00:04:48.129 END TEST skip_rpc_with_json 00:04:48.129 ************************************ 00:04:48.129 00:04:48.129 real 0m8.438s 00:04:48.129 user 0m8.042s 00:04:48.129 sys 0m0.573s 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.129 13:57:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:48.129 13:57:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.129 13:57:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.129 13:57:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.129 ************************************ 00:04:48.129 START TEST skip_rpc_with_delay 00:04:48.129 ************************************ 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.129 [2024-12-09 13:57:49.709158] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
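Note: just above, skip_rpc_with_json finished by relaunching the target with --no-rpc-server --json config.json and grepping its log for 'TCP Transport Init', confirming the saved transport configuration was restored. skip_rpc_with_delay then checks the converse guard: --wait-for-rpc defers subsystem initialization until a framework_start_init RPC arrives, so it is meaningless without an RPC server, and the combination is rejected with the error above. A sketch of the supported --wait-for-rpc flow (the sleep-based readiness wait is a simplification):

    build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    pid=$!
    sleep 5                                    # crude readiness wait, for brevity
    ./scripts/rpc.py framework_start_init      # subsystems initialize only now
    ./scripts/rpc.py save_config > config.json
    kill "$pid" && wait "$pid"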
00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.129 00:04:48.129 real 0m0.126s 00:04:48.129 user 0m0.077s 00:04:48.129 sys 0m0.048s 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.129 ************************************ 00:04:48.129 END TEST skip_rpc_with_delay 00:04:48.129 ************************************ 00:04:48.129 13:57:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:48.129 13:57:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:48.129 13:57:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:48.129 13:57:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:48.129 13:57:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.129 13:57:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.129 13:57:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.129 ************************************ 00:04:48.129 START TEST exit_on_failed_rpc_init 00:04:48.129 ************************************ 00:04:48.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57586 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57586 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57586 ']' 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.129 13:57:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.129 [2024-12-09 13:57:49.897914] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:04:48.129 [2024-12-09 13:57:49.898029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57586 ] 00:04:48.390 [2024-12-09 13:57:50.056064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.390 [2024-12-09 13:57:50.153494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:48.963 13:57:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.225 [2024-12-09 13:57:50.821081] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:49.225 [2024-12-09 13:57:50.821197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57598 ] 00:04:49.225 [2024-12-09 13:57:50.982837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.485 [2024-12-09 13:57:51.087427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.485 [2024-12-09 13:57:51.087513] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
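Note: exit_on_failed_rpc_init starts a second target (-m 0x2) while the first (pid 57586) still holds the default /var/tmp/spdk.sock; the second must log the listen failure above and exit non-zero. A rough sketch of the collision it provokes (in autotest each instance gets its own DPDK --file-prefix, visible in the EAL parameter lines, so only the RPC socket collides; two bare spdk_tgt instances may clash on hugepage files first):

    build/bin/spdk_tgt -m 0x1 & pid1=$!
    sleep 5
    if build/bin/spdk_tgt -m 0x2; then         # same default /var/tmp/spdk.sock
        echo 'FAIL: second target should refuse to start' >&2
    fi
    kill "$pid1" && wait "$pid1"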
00:04:49.485 [2024-12-09 13:57:51.087527] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:49.485 [2024-12-09 13:57:51.087559] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57586 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57586 ']' 00:04:49.485 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57586 00:04:49.743 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:49.743 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.743 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57586 00:04:49.743 killing process with pid 57586 00:04:49.743 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.743 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.743 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57586' 00:04:49.743 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57586 00:04:49.743 13:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57586 00:04:51.121 00:04:51.121 real 0m2.986s 00:04:51.121 user 0m3.288s 00:04:51.121 sys 0m0.419s 00:04:51.121 ************************************ 00:04:51.121 END TEST exit_on_failed_rpc_init 00:04:51.121 ************************************ 00:04:51.121 13:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.121 13:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.121 13:57:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.121 00:04:51.121 real 0m18.179s 00:04:51.121 user 0m17.319s 00:04:51.121 sys 0m1.576s 00:04:51.121 13:57:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.121 ************************************ 00:04:51.121 END TEST skip_rpc 00:04:51.121 ************************************ 00:04:51.121 13:57:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.121 13:57:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.121 13:57:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.121 13:57:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.121 13:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.382 
************************************ 00:04:51.382 START TEST rpc_client 00:04:51.382 ************************************ 00:04:51.382 13:57:52 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.382 * Looking for test storage... 00:04:51.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:51.382 13:57:52 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.382 13:57:52 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.382 13:57:52 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.382 13:57:53 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.382 13:57:53 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:51.382 13:57:53 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.382 13:57:53 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.382 --rc genhtml_branch_coverage=1 00:04:51.382 --rc genhtml_function_coverage=1 00:04:51.382 --rc genhtml_legend=1 00:04:51.382 --rc geninfo_all_blocks=1 00:04:51.382 --rc geninfo_unexecuted_blocks=1 00:04:51.382 00:04:51.382 ' 00:04:51.382 13:57:53 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.382 --rc genhtml_branch_coverage=1 00:04:51.382 --rc genhtml_function_coverage=1 00:04:51.382 --rc genhtml_legend=1 00:04:51.382 --rc geninfo_all_blocks=1 00:04:51.382 --rc geninfo_unexecuted_blocks=1 00:04:51.382 00:04:51.382 ' 00:04:51.382 13:57:53 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.382 --rc genhtml_branch_coverage=1 00:04:51.382 --rc genhtml_function_coverage=1 00:04:51.382 --rc genhtml_legend=1 00:04:51.382 --rc geninfo_all_blocks=1 00:04:51.382 --rc geninfo_unexecuted_blocks=1 00:04:51.382 00:04:51.382 ' 00:04:51.382 13:57:53 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.382 --rc genhtml_branch_coverage=1 00:04:51.382 --rc genhtml_function_coverage=1 00:04:51.382 --rc genhtml_legend=1 00:04:51.382 --rc geninfo_all_blocks=1 00:04:51.382 --rc geninfo_unexecuted_blocks=1 00:04:51.382 00:04:51.382 ' 00:04:51.382 13:57:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:51.382 OK 00:04:51.382 13:57:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.382 00:04:51.382 real 0m0.207s 00:04:51.382 user 0m0.135s 00:04:51.382 sys 0m0.075s 00:04:51.382 13:57:53 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.382 13:57:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.382 ************************************ 00:04:51.382 END TEST rpc_client 00:04:51.382 ************************************ 00:04:51.644 13:57:53 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:51.644 13:57:53 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.644 13:57:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.644 13:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:51.644 ************************************ 00:04:51.644 START TEST json_config 00:04:51.644 ************************************ 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.644 13:57:53 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.644 13:57:53 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.644 13:57:53 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.644 13:57:53 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.644 13:57:53 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.644 13:57:53 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.644 13:57:53 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.644 13:57:53 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.644 13:57:53 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.644 13:57:53 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.644 13:57:53 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.644 13:57:53 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:51.644 13:57:53 json_config -- scripts/common.sh@345 -- # : 1 00:04:51.644 13:57:53 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.644 13:57:53 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.644 13:57:53 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:51.644 13:57:53 json_config -- scripts/common.sh@353 -- # local d=1 00:04:51.644 13:57:53 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.644 13:57:53 json_config -- scripts/common.sh@355 -- # echo 1 00:04:51.644 13:57:53 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.644 13:57:53 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:51.644 13:57:53 json_config -- scripts/common.sh@353 -- # local d=2 00:04:51.644 13:57:53 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.644 13:57:53 json_config -- scripts/common.sh@355 -- # echo 2 00:04:51.644 13:57:53 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.644 13:57:53 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.644 13:57:53 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.644 13:57:53 json_config -- scripts/common.sh@368 -- # return 0 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.644 --rc genhtml_branch_coverage=1 00:04:51.644 --rc genhtml_function_coverage=1 00:04:51.644 --rc genhtml_legend=1 00:04:51.644 --rc geninfo_all_blocks=1 00:04:51.644 --rc geninfo_unexecuted_blocks=1 00:04:51.644 00:04:51.644 ' 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.644 --rc genhtml_branch_coverage=1 00:04:51.644 --rc genhtml_function_coverage=1 00:04:51.644 --rc genhtml_legend=1 00:04:51.644 --rc geninfo_all_blocks=1 00:04:51.644 --rc geninfo_unexecuted_blocks=1 00:04:51.644 00:04:51.644 ' 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.644 --rc genhtml_branch_coverage=1 00:04:51.644 --rc genhtml_function_coverage=1 00:04:51.644 --rc genhtml_legend=1 00:04:51.644 --rc geninfo_all_blocks=1 00:04:51.644 --rc geninfo_unexecuted_blocks=1 00:04:51.644 00:04:51.644 ' 00:04:51.644 13:57:53 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.644 --rc genhtml_branch_coverage=1 00:04:51.644 --rc genhtml_function_coverage=1 00:04:51.644 --rc genhtml_legend=1 00:04:51.644 --rc geninfo_all_blocks=1 00:04:51.644 --rc geninfo_unexecuted_blocks=1 00:04:51.644 00:04:51.644 ' 00:04:51.644 13:57:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.644 13:57:53 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a412fb6-ec4d-4742-888d-917af990c37a 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9a412fb6-ec4d-4742-888d-917af990c37a 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.644 13:57:53 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.644 13:57:53 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.644 13:57:53 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.644 13:57:53 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.644 13:57:53 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.644 13:57:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.645 13:57:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.645 13:57:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.645 13:57:53 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.645 13:57:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@51 -- # : 0 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.645 13:57:53 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.645 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.645 13:57:53 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.645 13:57:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:51.645 13:57:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.645 13:57:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.645 13:57:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.645 13:57:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.645 13:57:53 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:51.645 WARNING: No tests are enabled so not running JSON configuration tests 00:04:51.645 13:57:53 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:51.645 00:04:51.645 real 0m0.141s 00:04:51.645 user 0m0.083s 00:04:51.645 sys 0m0.060s 00:04:51.645 13:57:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.645 13:57:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.645 ************************************ 00:04:51.645 END TEST json_config 00:04:51.645 ************************************ 00:04:51.645 13:57:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.645 13:57:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.645 13:57:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.645 13:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:51.645 ************************************ 00:04:51.645 START TEST json_config_extra_key 00:04:51.645 ************************************ 00:04:51.645 13:57:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.906 13:57:53 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.906 --rc genhtml_branch_coverage=1 00:04:51.906 --rc genhtml_function_coverage=1 00:04:51.906 --rc genhtml_legend=1 00:04:51.906 --rc geninfo_all_blocks=1 00:04:51.906 --rc geninfo_unexecuted_blocks=1 00:04:51.906 00:04:51.906 ' 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.906 --rc genhtml_branch_coverage=1 00:04:51.906 --rc genhtml_function_coverage=1 00:04:51.906 --rc genhtml_legend=1 00:04:51.906 --rc geninfo_all_blocks=1 00:04:51.906 --rc geninfo_unexecuted_blocks=1 00:04:51.906 00:04:51.906 ' 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.906 --rc genhtml_branch_coverage=1 00:04:51.906 --rc genhtml_function_coverage=1 00:04:51.906 --rc genhtml_legend=1 00:04:51.906 --rc geninfo_all_blocks=1 00:04:51.906 --rc geninfo_unexecuted_blocks=1 00:04:51.906 00:04:51.906 ' 00:04:51.906 13:57:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.906 --rc genhtml_branch_coverage=1 00:04:51.906 --rc 
genhtml_function_coverage=1 00:04:51.906 --rc genhtml_legend=1 00:04:51.906 --rc geninfo_all_blocks=1 00:04:51.906 --rc geninfo_unexecuted_blocks=1 00:04:51.906 00:04:51.906 ' 00:04:51.906 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a412fb6-ec4d-4742-888d-917af990c37a 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9a412fb6-ec4d-4742-888d-917af990c37a 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.906 13:57:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.906 13:57:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.907 13:57:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.907 13:57:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.907 13:57:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.907 13:57:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.907 13:57:53 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.907 13:57:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.907 13:57:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.907 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.907 13:57:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.907 INFO: launching applications... 00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
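The json_config_extra_key setup above keys every per-app detail (PID, RPC socket, spdk_tgt parameters, JSON config path) by the app name "target" in associative arrays, so the launch record that follows is fully table-driven. A minimal sketch of that start-up pattern, with the spdk_tgt flags taken verbatim from the trace; wait_for_rpc_socket below is an illustrative stand-in for the suite's waitforlisten helper, not the real SPDK function:

    # Launch the SPDK target with a pre-baked JSON config and a private RPC socket
    # (flags exactly as in the traced json_config_test_start_app invocation).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!

    # Illustrative stand-in for waitforlisten: poll until the UNIX-domain RPC
    # socket appears, then assume the target is ready to serve RPCs.
    wait_for_rpc_socket() {
        local sock=$1 retries=${2:-100}
        while (( retries-- > 0 )); do
            [ -S "$sock" ] && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_socket /var/tmp/spdk_tgt.sock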
00:04:51.907 13:57:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57797 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.907 Waiting for target to run... 00:04:51.907 13:57:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57797 /var/tmp/spdk_tgt.sock 00:04:51.907 13:57:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57797 ']' 00:04:51.907 13:57:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.907 13:57:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.907 13:57:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.907 13:57:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.907 13:57:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.907 [2024-12-09 13:57:53.636213] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:51.907 [2024-12-09 13:57:53.636366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57797 ] 00:04:52.480 [2024-12-09 13:57:53.990723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.480 [2024-12-09 13:57:54.106205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.052 13:57:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.052 13:57:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.053 00:04:53.053 INFO: shutting down applications... 00:04:53.053 13:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
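The records above and below trace json_config_test_shutdown_app: one SIGINT, then up to 30 half-second kill -0 probes before the target counts as down. A condensed sketch of exactly that loop, with the PID from this run:

    app=target
    pid=57797                        # app_pid["$app"] in the traced run

    kill -SIGINT "$pid"              # ask spdk_tgt to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then   # kill -0 probes without signaling
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5                    # matches the sleep 0.5 records in the trace
    done

Each sleep 0.5 record in the surrounding trace is one iteration of this loop; the run above needed a handful of probes before kill -0 finally failed.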
00:04:53.053 13:57:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57797 ]] 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57797 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:53.053 13:57:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.623 13:57:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.623 13:57:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.623 13:57:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:53.623 13:57:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.916 13:57:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.916 13:57:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.916 13:57:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:53.916 13:57:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.498 13:57:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.498 13:57:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.498 13:57:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:54.498 13:57:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.065 13:57:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.065 13:57:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.065 13:57:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:55.065 SPDK target shutdown done 00:04:55.065 Success 00:04:55.065 13:57:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.065 13:57:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:55.065 13:57:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.065 13:57:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.065 13:57:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:55.065 00:04:55.065 real 0m3.275s 00:04:55.065 user 0m2.933s 00:04:55.065 sys 0m0.478s 00:04:55.065 ************************************ 00:04:55.065 END TEST json_config_extra_key 00:04:55.065 ************************************ 00:04:55.065 13:57:56 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.065 13:57:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.065 13:57:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.065 13:57:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.065 13:57:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.065 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.065 
************************************ 00:04:55.065 START TEST alias_rpc 00:04:55.065 ************************************ 00:04:55.066 13:57:56 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.066 * Looking for test storage... 00:04:55.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:55.066 13:57:56 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.066 13:57:56 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.066 13:57:56 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.323 13:57:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.323 --rc genhtml_branch_coverage=1 00:04:55.323 --rc genhtml_function_coverage=1 00:04:55.323 --rc genhtml_legend=1 00:04:55.323 --rc geninfo_all_blocks=1 00:04:55.323 --rc geninfo_unexecuted_blocks=1 00:04:55.323 00:04:55.323 ' 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.323 --rc genhtml_branch_coverage=1 00:04:55.323 --rc genhtml_function_coverage=1 00:04:55.323 --rc genhtml_legend=1 00:04:55.323 --rc geninfo_all_blocks=1 00:04:55.323 --rc geninfo_unexecuted_blocks=1 00:04:55.323 00:04:55.323 ' 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.323 --rc genhtml_branch_coverage=1 00:04:55.323 --rc genhtml_function_coverage=1 00:04:55.323 --rc genhtml_legend=1 00:04:55.323 --rc geninfo_all_blocks=1 00:04:55.323 --rc geninfo_unexecuted_blocks=1 00:04:55.323 00:04:55.323 ' 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.323 --rc genhtml_branch_coverage=1 00:04:55.323 --rc genhtml_function_coverage=1 00:04:55.323 --rc genhtml_legend=1 00:04:55.323 --rc geninfo_all_blocks=1 00:04:55.323 --rc geninfo_unexecuted_blocks=1 00:04:55.323 00:04:55.323 ' 00:04:55.323 13:57:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:55.323 13:57:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57890 00:04:55.323 13:57:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.323 13:57:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57890 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57890 ']' 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:55.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.323 13:57:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.323 [2024-12-09 13:57:56.964258] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:55.323 [2024-12-09 13:57:56.964359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57890 ] 00:04:55.581 [2024-12-09 13:57:57.123829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.581 [2024-12-09 13:57:57.218210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.147 13:57:57 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.147 13:57:57 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:56.147 13:57:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:56.405 13:57:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57890 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57890 ']' 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57890 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57890 00:04:56.405 killing process with pid 57890 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57890' 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@973 -- # kill 57890 00:04:56.405 13:57:58 alias_rpc -- common/autotest_common.sh@978 -- # wait 57890 00:04:57.782 ************************************ 00:04:57.782 END TEST alias_rpc 00:04:57.782 ************************************ 00:04:57.782 00:04:57.782 real 0m2.745s 00:04:57.782 user 0m2.849s 00:04:57.782 sys 0m0.411s 00:04:57.782 13:57:59 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.782 13:57:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.782 13:57:59 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:57.782 13:57:59 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:57.782 13:57:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.782 13:57:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.782 13:57:59 -- common/autotest_common.sh@10 -- # set +x 00:04:57.782 ************************************ 00:04:57.782 START TEST spdkcli_tcp 00:04:57.782 ************************************ 00:04:57.782 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:57.782 * Looking for test storage... 
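The alias_rpc teardown above runs killprocess from common/autotest_common.sh: confirm the PID was given and is alive, check the process name so a sudo wrapper is never signaled directly, then kill and reap it. A simplified sketch of that traced logic (the real helper treats the sudo case specially rather than just bailing out, as assumed here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # no PID, nothing to do
        kill -0 "$pid" 2>/dev/null || return 1  # already gone?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # In this run the target's comm is reactor_0, so the guard passes.
            [ "$process_name" = sudo ] && return 1   # simplifying assumption
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"      # reap; works because spdk_tgt is a child of this shell
    }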
00:04:57.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:57.782 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.782 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.782 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.040 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.040 13:57:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.041 13:57:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.041 --rc genhtml_branch_coverage=1 00:04:58.041 --rc genhtml_function_coverage=1 00:04:58.041 --rc genhtml_legend=1 00:04:58.041 --rc geninfo_all_blocks=1 00:04:58.041 --rc geninfo_unexecuted_blocks=1 00:04:58.041 00:04:58.041 ' 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.041 --rc genhtml_branch_coverage=1 00:04:58.041 --rc genhtml_function_coverage=1 00:04:58.041 --rc genhtml_legend=1 00:04:58.041 --rc geninfo_all_blocks=1 00:04:58.041 --rc geninfo_unexecuted_blocks=1 00:04:58.041 
00:04:58.041 ' 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.041 --rc genhtml_branch_coverage=1 00:04:58.041 --rc genhtml_function_coverage=1 00:04:58.041 --rc genhtml_legend=1 00:04:58.041 --rc geninfo_all_blocks=1 00:04:58.041 --rc geninfo_unexecuted_blocks=1 00:04:58.041 00:04:58.041 ' 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.041 --rc genhtml_branch_coverage=1 00:04:58.041 --rc genhtml_function_coverage=1 00:04:58.041 --rc genhtml_legend=1 00:04:58.041 --rc geninfo_all_blocks=1 00:04:58.041 --rc geninfo_unexecuted_blocks=1 00:04:58.041 00:04:58.041 ' 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57986 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57986 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57986 ']' 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.041 13:57:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.041 13:57:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.041 [2024-12-09 13:57:59.721289] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
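While the spdkcli_tcp target starts up around this point, note that every test in this section has opened with the same scripts/common.sh gate: lt 1.15 2 calls cmp_versions to decide whether the installed lcov is older than 2 and therefore needs the extra --rc flags. A minimal sketch of the traced comparison, assuming purely numeric components (the traced decimal helper also validates each field against ^[0-9]+$, which this sketch omits):

    cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$3"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # pad short versions with 0
            (( d1 > d2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    cmp_versions 1.15 '<' 2 && echo 'old lcov: enable branch/function coverage'

Here ver1=(1 15) and ver2=(2), so the first component already decides it: 1 < 2, the '<' operator matches, and the function returns 0, which is why each test goes on to export the long LCOV_OPTS string seen repeatedly above.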
00:04:58.041 [2024-12-09 13:57:59.721562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57986 ] 00:04:58.299 [2024-12-09 13:57:59.877555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.299 [2024-12-09 13:57:59.976013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.299 [2024-12-09 13:57:59.976021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.866 13:58:00 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.866 13:58:00 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:58.866 13:58:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:58.866 13:58:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57998 00:04:58.866 13:58:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.125 [ 00:04:59.125 "bdev_malloc_delete", 00:04:59.125 "bdev_malloc_create", 00:04:59.125 "bdev_null_resize", 00:04:59.125 "bdev_null_delete", 00:04:59.125 "bdev_null_create", 00:04:59.125 "bdev_nvme_cuse_unregister", 00:04:59.125 "bdev_nvme_cuse_register", 00:04:59.125 "bdev_opal_new_user", 00:04:59.125 "bdev_opal_set_lock_state", 00:04:59.125 "bdev_opal_delete", 00:04:59.125 "bdev_opal_get_info", 00:04:59.125 "bdev_opal_create", 00:04:59.125 "bdev_nvme_opal_revert", 00:04:59.125 "bdev_nvme_opal_init", 00:04:59.125 "bdev_nvme_send_cmd", 00:04:59.125 "bdev_nvme_set_keys", 00:04:59.125 "bdev_nvme_get_path_iostat", 00:04:59.125 "bdev_nvme_get_mdns_discovery_info", 00:04:59.125 "bdev_nvme_stop_mdns_discovery", 00:04:59.125 "bdev_nvme_start_mdns_discovery", 00:04:59.125 "bdev_nvme_set_multipath_policy", 00:04:59.125 "bdev_nvme_set_preferred_path", 00:04:59.125 "bdev_nvme_get_io_paths", 00:04:59.125 "bdev_nvme_remove_error_injection", 00:04:59.125 "bdev_nvme_add_error_injection", 00:04:59.125 "bdev_nvme_get_discovery_info", 00:04:59.125 "bdev_nvme_stop_discovery", 00:04:59.125 "bdev_nvme_start_discovery", 00:04:59.125 "bdev_nvme_get_controller_health_info", 00:04:59.125 "bdev_nvme_disable_controller", 00:04:59.125 "bdev_nvme_enable_controller", 00:04:59.125 "bdev_nvme_reset_controller", 00:04:59.125 "bdev_nvme_get_transport_statistics", 00:04:59.125 "bdev_nvme_apply_firmware", 00:04:59.125 "bdev_nvme_detach_controller", 00:04:59.125 "bdev_nvme_get_controllers", 00:04:59.125 "bdev_nvme_attach_controller", 00:04:59.125 "bdev_nvme_set_hotplug", 00:04:59.125 "bdev_nvme_set_options", 00:04:59.125 "bdev_passthru_delete", 00:04:59.125 "bdev_passthru_create", 00:04:59.125 "bdev_lvol_set_parent_bdev", 00:04:59.125 "bdev_lvol_set_parent", 00:04:59.125 "bdev_lvol_check_shallow_copy", 00:04:59.125 "bdev_lvol_start_shallow_copy", 00:04:59.125 "bdev_lvol_grow_lvstore", 00:04:59.125 "bdev_lvol_get_lvols", 00:04:59.125 "bdev_lvol_get_lvstores", 00:04:59.125 "bdev_lvol_delete", 00:04:59.125 "bdev_lvol_set_read_only", 00:04:59.125 "bdev_lvol_resize", 00:04:59.125 "bdev_lvol_decouple_parent", 00:04:59.125 "bdev_lvol_inflate", 00:04:59.125 "bdev_lvol_rename", 00:04:59.125 "bdev_lvol_clone_bdev", 00:04:59.125 "bdev_lvol_clone", 00:04:59.125 "bdev_lvol_snapshot", 00:04:59.125 "bdev_lvol_create", 00:04:59.125 "bdev_lvol_delete_lvstore", 00:04:59.125 "bdev_lvol_rename_lvstore", 00:04:59.125 
"bdev_lvol_create_lvstore", 00:04:59.125 "bdev_raid_set_options", 00:04:59.125 "bdev_raid_remove_base_bdev", 00:04:59.125 "bdev_raid_add_base_bdev", 00:04:59.125 "bdev_raid_delete", 00:04:59.125 "bdev_raid_create", 00:04:59.125 "bdev_raid_get_bdevs", 00:04:59.125 "bdev_error_inject_error", 00:04:59.125 "bdev_error_delete", 00:04:59.125 "bdev_error_create", 00:04:59.125 "bdev_split_delete", 00:04:59.125 "bdev_split_create", 00:04:59.125 "bdev_delay_delete", 00:04:59.125 "bdev_delay_create", 00:04:59.125 "bdev_delay_update_latency", 00:04:59.125 "bdev_zone_block_delete", 00:04:59.125 "bdev_zone_block_create", 00:04:59.125 "blobfs_create", 00:04:59.125 "blobfs_detect", 00:04:59.125 "blobfs_set_cache_size", 00:04:59.125 "bdev_xnvme_delete", 00:04:59.125 "bdev_xnvme_create", 00:04:59.125 "bdev_aio_delete", 00:04:59.125 "bdev_aio_rescan", 00:04:59.125 "bdev_aio_create", 00:04:59.125 "bdev_ftl_set_property", 00:04:59.125 "bdev_ftl_get_properties", 00:04:59.125 "bdev_ftl_get_stats", 00:04:59.125 "bdev_ftl_unmap", 00:04:59.125 "bdev_ftl_unload", 00:04:59.125 "bdev_ftl_delete", 00:04:59.125 "bdev_ftl_load", 00:04:59.125 "bdev_ftl_create", 00:04:59.125 "bdev_virtio_attach_controller", 00:04:59.125 "bdev_virtio_scsi_get_devices", 00:04:59.125 "bdev_virtio_detach_controller", 00:04:59.125 "bdev_virtio_blk_set_hotplug", 00:04:59.125 "bdev_iscsi_delete", 00:04:59.125 "bdev_iscsi_create", 00:04:59.125 "bdev_iscsi_set_options", 00:04:59.125 "accel_error_inject_error", 00:04:59.125 "ioat_scan_accel_module", 00:04:59.125 "dsa_scan_accel_module", 00:04:59.125 "iaa_scan_accel_module", 00:04:59.125 "keyring_file_remove_key", 00:04:59.125 "keyring_file_add_key", 00:04:59.125 "keyring_linux_set_options", 00:04:59.125 "fsdev_aio_delete", 00:04:59.125 "fsdev_aio_create", 00:04:59.125 "iscsi_get_histogram", 00:04:59.125 "iscsi_enable_histogram", 00:04:59.126 "iscsi_set_options", 00:04:59.126 "iscsi_get_auth_groups", 00:04:59.126 "iscsi_auth_group_remove_secret", 00:04:59.126 "iscsi_auth_group_add_secret", 00:04:59.126 "iscsi_delete_auth_group", 00:04:59.126 "iscsi_create_auth_group", 00:04:59.126 "iscsi_set_discovery_auth", 00:04:59.126 "iscsi_get_options", 00:04:59.126 "iscsi_target_node_request_logout", 00:04:59.126 "iscsi_target_node_set_redirect", 00:04:59.126 "iscsi_target_node_set_auth", 00:04:59.126 "iscsi_target_node_add_lun", 00:04:59.126 "iscsi_get_stats", 00:04:59.126 "iscsi_get_connections", 00:04:59.126 "iscsi_portal_group_set_auth", 00:04:59.126 "iscsi_start_portal_group", 00:04:59.126 "iscsi_delete_portal_group", 00:04:59.126 "iscsi_create_portal_group", 00:04:59.126 "iscsi_get_portal_groups", 00:04:59.126 "iscsi_delete_target_node", 00:04:59.126 "iscsi_target_node_remove_pg_ig_maps", 00:04:59.126 "iscsi_target_node_add_pg_ig_maps", 00:04:59.126 "iscsi_create_target_node", 00:04:59.126 "iscsi_get_target_nodes", 00:04:59.126 "iscsi_delete_initiator_group", 00:04:59.126 "iscsi_initiator_group_remove_initiators", 00:04:59.126 "iscsi_initiator_group_add_initiators", 00:04:59.126 "iscsi_create_initiator_group", 00:04:59.126 "iscsi_get_initiator_groups", 00:04:59.126 "nvmf_set_crdt", 00:04:59.126 "nvmf_set_config", 00:04:59.126 "nvmf_set_max_subsystems", 00:04:59.126 "nvmf_stop_mdns_prr", 00:04:59.126 "nvmf_publish_mdns_prr", 00:04:59.126 "nvmf_subsystem_get_listeners", 00:04:59.126 "nvmf_subsystem_get_qpairs", 00:04:59.126 "nvmf_subsystem_get_controllers", 00:04:59.126 "nvmf_get_stats", 00:04:59.126 "nvmf_get_transports", 00:04:59.126 "nvmf_create_transport", 00:04:59.126 "nvmf_get_targets", 00:04:59.126 
"nvmf_delete_target", 00:04:59.126 "nvmf_create_target", 00:04:59.126 "nvmf_subsystem_allow_any_host", 00:04:59.126 "nvmf_subsystem_set_keys", 00:04:59.126 "nvmf_subsystem_remove_host", 00:04:59.126 "nvmf_subsystem_add_host", 00:04:59.126 "nvmf_ns_remove_host", 00:04:59.126 "nvmf_ns_add_host", 00:04:59.126 "nvmf_subsystem_remove_ns", 00:04:59.126 "nvmf_subsystem_set_ns_ana_group", 00:04:59.126 "nvmf_subsystem_add_ns", 00:04:59.126 "nvmf_subsystem_listener_set_ana_state", 00:04:59.126 "nvmf_discovery_get_referrals", 00:04:59.126 "nvmf_discovery_remove_referral", 00:04:59.126 "nvmf_discovery_add_referral", 00:04:59.126 "nvmf_subsystem_remove_listener", 00:04:59.126 "nvmf_subsystem_add_listener", 00:04:59.126 "nvmf_delete_subsystem", 00:04:59.126 "nvmf_create_subsystem", 00:04:59.126 "nvmf_get_subsystems", 00:04:59.126 "env_dpdk_get_mem_stats", 00:04:59.126 "nbd_get_disks", 00:04:59.126 "nbd_stop_disk", 00:04:59.126 "nbd_start_disk", 00:04:59.126 "ublk_recover_disk", 00:04:59.126 "ublk_get_disks", 00:04:59.126 "ublk_stop_disk", 00:04:59.126 "ublk_start_disk", 00:04:59.126 "ublk_destroy_target", 00:04:59.126 "ublk_create_target", 00:04:59.126 "virtio_blk_create_transport", 00:04:59.126 "virtio_blk_get_transports", 00:04:59.126 "vhost_controller_set_coalescing", 00:04:59.126 "vhost_get_controllers", 00:04:59.126 "vhost_delete_controller", 00:04:59.126 "vhost_create_blk_controller", 00:04:59.126 "vhost_scsi_controller_remove_target", 00:04:59.126 "vhost_scsi_controller_add_target", 00:04:59.126 "vhost_start_scsi_controller", 00:04:59.126 "vhost_create_scsi_controller", 00:04:59.126 "thread_set_cpumask", 00:04:59.126 "scheduler_set_options", 00:04:59.126 "framework_get_governor", 00:04:59.126 "framework_get_scheduler", 00:04:59.126 "framework_set_scheduler", 00:04:59.126 "framework_get_reactors", 00:04:59.126 "thread_get_io_channels", 00:04:59.126 "thread_get_pollers", 00:04:59.126 "thread_get_stats", 00:04:59.126 "framework_monitor_context_switch", 00:04:59.126 "spdk_kill_instance", 00:04:59.126 "log_enable_timestamps", 00:04:59.126 "log_get_flags", 00:04:59.126 "log_clear_flag", 00:04:59.126 "log_set_flag", 00:04:59.126 "log_get_level", 00:04:59.126 "log_set_level", 00:04:59.126 "log_get_print_level", 00:04:59.126 "log_set_print_level", 00:04:59.126 "framework_enable_cpumask_locks", 00:04:59.126 "framework_disable_cpumask_locks", 00:04:59.126 "framework_wait_init", 00:04:59.126 "framework_start_init", 00:04:59.126 "scsi_get_devices", 00:04:59.126 "bdev_get_histogram", 00:04:59.126 "bdev_enable_histogram", 00:04:59.126 "bdev_set_qos_limit", 00:04:59.126 "bdev_set_qd_sampling_period", 00:04:59.126 "bdev_get_bdevs", 00:04:59.126 "bdev_reset_iostat", 00:04:59.126 "bdev_get_iostat", 00:04:59.126 "bdev_examine", 00:04:59.126 "bdev_wait_for_examine", 00:04:59.126 "bdev_set_options", 00:04:59.126 "accel_get_stats", 00:04:59.126 "accel_set_options", 00:04:59.126 "accel_set_driver", 00:04:59.126 "accel_crypto_key_destroy", 00:04:59.126 "accel_crypto_keys_get", 00:04:59.126 "accel_crypto_key_create", 00:04:59.126 "accel_assign_opc", 00:04:59.126 "accel_get_module_info", 00:04:59.126 "accel_get_opc_assignments", 00:04:59.126 "vmd_rescan", 00:04:59.126 "vmd_remove_device", 00:04:59.126 "vmd_enable", 00:04:59.126 "sock_get_default_impl", 00:04:59.126 "sock_set_default_impl", 00:04:59.126 "sock_impl_set_options", 00:04:59.126 "sock_impl_get_options", 00:04:59.126 "iobuf_get_stats", 00:04:59.126 "iobuf_set_options", 00:04:59.126 "keyring_get_keys", 00:04:59.126 "framework_get_pci_devices", 00:04:59.126 
"framework_get_config", 00:04:59.126 "framework_get_subsystems", 00:04:59.126 "fsdev_set_opts", 00:04:59.126 "fsdev_get_opts", 00:04:59.126 "trace_get_info", 00:04:59.126 "trace_get_tpoint_group_mask", 00:04:59.126 "trace_disable_tpoint_group", 00:04:59.126 "trace_enable_tpoint_group", 00:04:59.126 "trace_clear_tpoint_mask", 00:04:59.126 "trace_set_tpoint_mask", 00:04:59.126 "notify_get_notifications", 00:04:59.126 "notify_get_types", 00:04:59.126 "spdk_get_version", 00:04:59.126 "rpc_get_methods" 00:04:59.126 ] 00:04:59.126 13:58:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.126 13:58:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:59.126 13:58:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57986 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57986 ']' 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57986 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57986 00:04:59.126 killing process with pid 57986 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57986' 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57986 00:04:59.126 13:58:00 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57986 00:05:01.030 ************************************ 00:05:01.030 END TEST spdkcli_tcp 00:05:01.030 ************************************ 00:05:01.030 00:05:01.030 real 0m2.817s 00:05:01.030 user 0m5.066s 00:05:01.030 sys 0m0.415s 00:05:01.030 13:58:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.030 13:58:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.030 13:58:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.030 13:58:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.030 13:58:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.030 13:58:02 -- common/autotest_common.sh@10 -- # set +x 00:05:01.030 ************************************ 00:05:01.030 START TEST dpdk_mem_utility 00:05:01.030 ************************************ 00:05:01.030 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.030 * Looking for test storage... 
00:05:01.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:01.030 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.030 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.030 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.030 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:01.030 13:58:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:01.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.031 13:58:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.031 --rc genhtml_branch_coverage=1 00:05:01.031 --rc genhtml_function_coverage=1 00:05:01.031 --rc genhtml_legend=1 00:05:01.031 --rc geninfo_all_blocks=1 00:05:01.031 --rc geninfo_unexecuted_blocks=1 00:05:01.031 00:05:01.031 ' 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.031 --rc genhtml_branch_coverage=1 00:05:01.031 --rc genhtml_function_coverage=1 00:05:01.031 --rc genhtml_legend=1 00:05:01.031 --rc geninfo_all_blocks=1 00:05:01.031 --rc geninfo_unexecuted_blocks=1 00:05:01.031 00:05:01.031 ' 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.031 --rc genhtml_branch_coverage=1 00:05:01.031 --rc genhtml_function_coverage=1 00:05:01.031 --rc genhtml_legend=1 00:05:01.031 --rc geninfo_all_blocks=1 00:05:01.031 --rc geninfo_unexecuted_blocks=1 00:05:01.031 00:05:01.031 ' 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.031 --rc genhtml_branch_coverage=1 00:05:01.031 --rc genhtml_function_coverage=1 00:05:01.031 --rc genhtml_legend=1 00:05:01.031 --rc geninfo_all_blocks=1 00:05:01.031 --rc geninfo_unexecuted_blocks=1 00:05:01.031 00:05:01.031 ' 00:05:01.031 13:58:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.031 13:58:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58092 00:05:01.031 13:58:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58092 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58092 ']' 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.031 13:58:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.031 13:58:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.031 [2024-12-09 13:58:02.574951] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
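Once the dpdk_mem_utility target below finishes starting, the test's core sequence is short: ask the running target to dump its DPDK allocator state to a file, then post-process that file twice. A minimal sketch of the traced commands (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py):

    # Ask spdk_tgt to write its DPDK memory state; the RPC answers with the path.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # First pass: summarize heaps, mempools and memzones from the dump.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # Second pass: per-element breakdown of heap 0, as traced below.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0

The element listing that follows is the -m 0 output: every free and allocated region of the 824 MiB heap summarized above.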
00:05:01.031 [2024-12-09 13:58:02.575066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58092 ] 00:05:01.031 [2024-12-09 13:58:02.729900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.291 [2024-12-09 13:58:02.825424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.863 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.863 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:01.863 13:58:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.863 13:58:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.863 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.863 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.863 { 00:05:01.863 "filename": "/tmp/spdk_mem_dump.txt" 00:05:01.863 } 00:05:01.863 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.863 13:58:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.863 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:01.863 1 heaps totaling size 824.000000 MiB 00:05:01.863 size: 824.000000 MiB heap id: 0 00:05:01.863 end heaps---------- 00:05:01.863 9 mempools totaling size 603.782043 MiB 00:05:01.863 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:01.863 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:01.863 size: 100.555481 MiB name: bdev_io_58092 00:05:01.863 size: 50.003479 MiB name: msgpool_58092 00:05:01.863 size: 36.509338 MiB name: fsdev_io_58092 00:05:01.863 size: 21.763794 MiB name: PDU_Pool 00:05:01.863 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:01.863 size: 4.133484 MiB name: evtpool_58092 00:05:01.863 size: 0.026123 MiB name: Session_Pool 00:05:01.863 end mempools------- 00:05:01.863 6 memzones totaling size 4.142822 MiB 00:05:01.863 size: 1.000366 MiB name: RG_ring_0_58092 00:05:01.863 size: 1.000366 MiB name: RG_ring_1_58092 00:05:01.863 size: 1.000366 MiB name: RG_ring_4_58092 00:05:01.863 size: 1.000366 MiB name: RG_ring_5_58092 00:05:01.863 size: 0.125366 MiB name: RG_ring_2_58092 00:05:01.863 size: 0.015991 MiB name: RG_ring_3_58092 00:05:01.863 end memzones------- 00:05:01.863 13:58:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:01.863 heap id: 0 total size: 824.000000 MiB number of busy elements: 326 number of free elements: 18 00:05:01.863 list of free elements. 
size: 16.778687 MiB 00:05:01.863 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:01.863 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:01.863 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:01.863 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:01.863 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:01.863 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:01.863 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:01.863 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:01.863 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:01.863 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:01.863 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:01.863 element at address: 0x20001b400000 with size: 0.559265 MiB 00:05:01.863 element at address: 0x200000c00000 with size: 0.489441 MiB 00:05:01.863 element at address: 0x200019600000 with size: 0.488220 MiB 00:05:01.863 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:01.863 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:01.863 element at address: 0x200028800000 with size: 0.390686 MiB 00:05:01.863 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:01.863 list of standard malloc elements. size: 199.290405 MiB 00:05:01.863 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:01.863 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:01.863 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:01.863 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:01.863 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:01.863 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:01.863 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:01.863 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:01.863 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:01.863 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:01.863 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:01.863 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:01.863 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:01.864 [several hundred further malloc elements elided: contiguous "element at address: 0x... with size: 0.000244 MiB" entries running from 0x2000004ff040 through 0x20002886fe80, covering the 0x2000004..., 0x20000087..., 0x2000008..., 0x200000c..., 0x20000a5..., 0x200012..., 0x2000196..., 0x200019..., 0x20001b4..., and 0x2000288... address groups]
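The dump above can be reproduced outside the harness: the test first calls the env_dpdk_get_mem_stats RPC (which writes /tmp/spdk_mem_dump.txt) and then runs scripts/dpdk_mem_info.py over it, as the trace shows. A minimal sketch of that flow, assuming an SPDK build under the repo path from the log and hugepages already configured; the fixed sleep is an illustrative stand-in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Start a single-core target, matching "-c 0x1" in the EAL arguments above.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
    tgt_pid=$!
    trap 'kill "$tgt_pid"' EXIT
    sleep 2   # crude wait; the harness polls the RPC socket instead

    # Ask the target to dump its DPDK memory stats to /tmp/spdk_mem_dump.txt...
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats
    # ...then print the heap/mempool/memzone summary, and the
    # per-element detail for heap id 0 (the two listings shown above).
    "$SPDK_DIR/scripts/dpdk_mem_info.py"
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0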
00:05:01.865 list of memzone associated elements. size: 607.930908 MiB 00:05:01.865 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:01.865 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:01.865 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:01.865 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:01.865 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:01.865 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58092_0 00:05:01.865 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:01.865 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58092_0 00:05:01.865 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:01.865 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58092_0 00:05:01.865 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:01.866 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:01.866 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:01.866 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:01.866 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:01.866 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58092_0 00:05:01.866 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:01.866 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58092 00:05:01.866 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:01.866 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58092 00:05:01.866 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:01.866 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:01.866 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:01.866 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:01.866 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:01.866 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:01.866 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:01.866 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:01.866 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:01.866 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58092 00:05:01.866 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:01.866 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58092 00:05:01.866 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:01.866 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58092 00:05:01.866 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:01.866 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58092 00:05:01.866 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:01.866 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58092 00:05:01.866 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:01.866 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58092 00:05:01.866 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:01.866 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:01.866 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:01.866 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:01.866 element at address: 0x200019e7c440 with size: 0.250549 MiB 
00:05:01.866 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:01.866 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:01.866 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58092 00:05:01.866 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:01.866 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58092 00:05:01.866 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:01.866 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:01.866 element at address: 0x200028864240 with size: 0.023804 MiB 00:05:01.866 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:01.866 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:01.866 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58092 00:05:01.866 element at address: 0x20002886a3c0 with size: 0.002502 MiB 00:05:01.866 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:01.866 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:01.866 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58092 00:05:01.866 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:01.866 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58092 00:05:01.866 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:01.866 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58092 00:05:01.866 element at address: 0x20002886af00 with size: 0.000366 MiB 00:05:01.866 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:01.866 13:58:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:01.866 13:58:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58092 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58092 ']' 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58092 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58092 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58092' 00:05:01.866 killing process with pid 58092 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58092 00:05:01.866 13:58:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58092 00:05:03.775 00:05:03.775 real 0m2.834s 00:05:03.775 user 0m2.835s 00:05:03.775 sys 0m0.417s 00:05:03.775 13:58:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.775 ************************************ 00:05:03.775 END TEST dpdk_mem_utility 00:05:03.775 ************************************ 00:05:03.775 13:58:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.775 13:58:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.775 13:58:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.775 13:58:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.775 
13:58:05 -- common/autotest_common.sh@10 -- # set +x 00:05:03.775 ************************************ 00:05:03.775 START TEST event 00:05:03.775 ************************************ 00:05:03.775 13:58:05 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.775 * Looking for test storage... 00:05:03.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:03.775 13:58:05 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.775 13:58:05 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.775 13:58:05 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.775 13:58:05 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.775 13:58:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.775 13:58:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.775 13:58:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.775 13:58:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.775 13:58:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.775 13:58:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.775 13:58:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.775 13:58:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.775 13:58:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.775 13:58:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.775 13:58:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.775 13:58:05 event -- scripts/common.sh@344 -- # case "$op" in 00:05:03.775 13:58:05 event -- scripts/common.sh@345 -- # : 1 00:05:03.775 13:58:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.775 13:58:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.775 13:58:05 event -- scripts/common.sh@365 -- # decimal 1 00:05:03.775 13:58:05 event -- scripts/common.sh@353 -- # local d=1 00:05:03.775 13:58:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.775 13:58:05 event -- scripts/common.sh@355 -- # echo 1 00:05:03.775 13:58:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.775 13:58:05 event -- scripts/common.sh@366 -- # decimal 2 00:05:03.775 13:58:05 event -- scripts/common.sh@353 -- # local d=2 00:05:03.775 13:58:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.775 13:58:05 event -- scripts/common.sh@355 -- # echo 2 00:05:03.775 13:58:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.775 13:58:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.776 13:58:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.776 13:58:05 event -- scripts/common.sh@368 -- # return 0 00:05:03.776 13:58:05 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.776 13:58:05 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.776 --rc genhtml_branch_coverage=1 00:05:03.776 --rc genhtml_function_coverage=1 00:05:03.776 --rc genhtml_legend=1 00:05:03.776 --rc geninfo_all_blocks=1 00:05:03.776 --rc geninfo_unexecuted_blocks=1 00:05:03.776 00:05:03.776 ' 00:05:03.776 13:58:05 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.776 --rc genhtml_branch_coverage=1 00:05:03.776 --rc genhtml_function_coverage=1 00:05:03.776 --rc genhtml_legend=1 00:05:03.776 --rc geninfo_all_blocks=1 00:05:03.776 --rc geninfo_unexecuted_blocks=1 00:05:03.776 00:05:03.776 ' 00:05:03.776 13:58:05 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.776 --rc genhtml_branch_coverage=1 00:05:03.776 --rc genhtml_function_coverage=1 00:05:03.776 --rc genhtml_legend=1 00:05:03.776 --rc geninfo_all_blocks=1 00:05:03.776 --rc geninfo_unexecuted_blocks=1 00:05:03.776 00:05:03.776 ' 00:05:03.776 13:58:05 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.776 --rc genhtml_branch_coverage=1 00:05:03.776 --rc genhtml_function_coverage=1 00:05:03.776 --rc genhtml_legend=1 00:05:03.776 --rc geninfo_all_blocks=1 00:05:03.776 --rc geninfo_unexecuted_blocks=1 00:05:03.776 00:05:03.776 ' 00:05:03.776 13:58:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:03.776 13:58:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:03.776 13:58:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.776 13:58:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:03.776 13:58:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.776 13:58:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.776 ************************************ 00:05:03.776 START TEST event_perf 00:05:03.776 ************************************ 00:05:03.776 13:58:05 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.776 Running I/O for 1 seconds...[2024-12-09 
13:58:05.436177] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:03.776 [2024-12-09 13:58:05.436371] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58189 ] 00:05:04.036 [2024-12-09 13:58:05.589846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.036 [2024-12-09 13:58:05.690898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.036 [2024-12-09 13:58:05.691204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.036 Running I/O for 1 seconds...[2024-12-09 13:58:05.691530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.036 [2024-12-09 13:58:05.691572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.475 00:05:05.475 lcore 0: 194758 00:05:05.475 lcore 1: 194752 00:05:05.475 lcore 2: 194753 00:05:05.475 lcore 3: 194756 00:05:05.475 done. 00:05:05.475 00:05:05.475 real 0m1.453s 00:05:05.475 user 0m4.255s 00:05:05.475 sys 0m0.079s 00:05:05.475 13:58:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.475 13:58:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.475 ************************************ 00:05:05.475 END TEST event_perf 00:05:05.475 ************************************ 00:05:05.475 13:58:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.475 13:58:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.475 13:58:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.475 13:58:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.475 ************************************ 00:05:05.475 START TEST event_reactor 00:05:05.475 ************************************ 00:05:05.475 13:58:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.475 [2024-12-09 13:58:06.951131] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
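The event_perf run that just ended drove four reactors (-m 0xF) for one second (-t 1) and printed one event count per lcore; summing those counts gives the aggregate rate. A minimal sketch of re-collecting the total by hand, with an illustrative awk summary that is not part of the test itself:

    EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf

    # Run for 1 second on cores 0-3 and total the "lcore N: count" lines.
    "$EVENT_PERF" -m 0xF -t 1 | awk '/^lcore/ { sum += $3 } END { print "total events:", sum }'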
00:05:05.475 [2024-12-09 13:58:06.951352] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58228 ] 00:05:05.475 [2024-12-09 13:58:07.111184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.475 [2024-12-09 13:58:07.211481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.872 test_start 00:05:06.872 oneshot 00:05:06.872 tick 100 00:05:06.872 tick 100 00:05:06.872 tick 250 00:05:06.872 tick 100 00:05:06.872 tick 100 00:05:06.872 tick 100 00:05:06.872 tick 250 00:05:06.872 tick 500 00:05:06.872 tick 100 00:05:06.872 tick 100 00:05:06.872 tick 250 00:05:06.872 tick 100 00:05:06.872 tick 100 00:05:06.872 test_end 00:05:06.872 00:05:06.872 real 0m1.447s 00:05:06.872 user 0m1.263s 00:05:06.872 sys 0m0.076s 00:05:06.872 13:58:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.872 13:58:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.872 ************************************ 00:05:06.872 END TEST event_reactor 00:05:06.872 ************************************ 00:05:06.872 13:58:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.872 13:58:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:06.872 13:58:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.872 13:58:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.872 ************************************ 00:05:06.872 START TEST event_reactor_perf 00:05:06.872 ************************************ 00:05:06.872 13:58:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.872 [2024-12-09 13:58:08.440686] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
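In the reactor run that just finished, each "tick N" line is one poller invocation, with N apparently the period the poller was registered with (100, 250, or 500); the log itself does not state that interpretation, so treat it as an assumption. A minimal sketch for tallying the ticks on a manual run, with an illustrative awk step:

    REACTOR=/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor

    # Count how often each tick value fires during a 1-second run.
    "$REACTOR" -t 1 | awk '/^tick/ { n[$2]++ } END { for (p in n) print "tick", p, "fired", n[p], "times" }'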
00:05:06.872 [2024-12-09 13:58:08.440792] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58265 ] 00:05:06.872 [2024-12-09 13:58:08.600030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.130 [2024-12-09 13:58:08.696503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.064 test_start 00:05:08.064 test_end 00:05:08.064 Performance: 319242 events per second 00:05:08.064 00:05:08.064 real 0m1.437s 00:05:08.064 user 0m1.257s 00:05:08.064 sys 0m0.073s 00:05:08.064 13:58:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.064 13:58:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.064 ************************************ 00:05:08.064 END TEST event_reactor_perf 00:05:08.064 ************************************ 00:05:08.360 13:58:09 event -- event/event.sh@49 -- # uname -s 00:05:08.360 13:58:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.360 13:58:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:08.360 13:58:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.360 13:58:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.360 13:58:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.360 ************************************ 00:05:08.360 START TEST event_scheduler 00:05:08.360 ************************************ 00:05:08.360 13:58:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:08.360 * Looking for test storage... 
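The event_scheduler test starting here launches the scheduler app with --wait-for-rpc and then configures it over the default RPC socket, as the trace below shows (framework_set_scheduler dynamic, then framework_start_init). A minimal sketch of that flow, with paths taken from the log and a fixed sleep standing in for the harness's socket polling:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # -m 0xF: four reactors; -p 0x2: main core 2 (matches --main-lcore=2 below).
    "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    sched_pid=$!
    sleep 2

    "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
    "$SPDK_DIR/scripts/rpc.py" framework_start_init
    kill "$sched_pid"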
00:05:08.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:08.360 13:58:09 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.360 13:58:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.360 13:58:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.360 13:58:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.360 --rc genhtml_branch_coverage=1 00:05:08.360 --rc genhtml_function_coverage=1 00:05:08.360 --rc genhtml_legend=1 00:05:08.360 --rc geninfo_all_blocks=1 00:05:08.360 --rc geninfo_unexecuted_blocks=1 00:05:08.360 00:05:08.360 ' 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.360 --rc genhtml_branch_coverage=1 00:05:08.360 --rc genhtml_function_coverage=1 00:05:08.360 --rc genhtml_legend=1 00:05:08.360 --rc geninfo_all_blocks=1 00:05:08.360 --rc geninfo_unexecuted_blocks=1 00:05:08.360 00:05:08.360 ' 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.360 --rc genhtml_branch_coverage=1 00:05:08.360 --rc genhtml_function_coverage=1 00:05:08.360 --rc genhtml_legend=1 00:05:08.360 --rc geninfo_all_blocks=1 00:05:08.360 --rc geninfo_unexecuted_blocks=1 00:05:08.360 00:05:08.360 ' 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.360 --rc genhtml_branch_coverage=1 00:05:08.360 --rc genhtml_function_coverage=1 00:05:08.360 --rc genhtml_legend=1 00:05:08.360 --rc geninfo_all_blocks=1 00:05:08.360 --rc geninfo_unexecuted_blocks=1 00:05:08.360 00:05:08.360 ' 00:05:08.360 13:58:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.360 13:58:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58335 00:05:08.360 13:58:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.360 13:58:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.360 13:58:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58335 00:05:08.360 13:58:10 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58335 ']' 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.360 13:58:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.360 [2024-12-09 13:58:10.080905] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:08.360 [2024-12-09 13:58:10.081028] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58335 ] 00:05:08.620 [2024-12-09 13:58:10.240424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.620 [2024-12-09 13:58:10.343314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.620 [2024-12-09 13:58:10.343947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.620 [2024-12-09 13:58:10.344156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.620 [2024-12-09 13:58:10.344175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.189 13:58:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.189 13:58:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:09.189 13:58:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.189 13:58:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.189 13:58:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.189 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.189 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.189 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.189 POWER: Cannot set governor of lcore 0 to performance 00:05:09.189 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.189 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.189 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.189 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.189 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:09.189 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:09.189 POWER: Unable to set Power Management Environment for lcore 0 00:05:09.189 [2024-12-09 13:58:10.929750] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:09.189 [2024-12-09 13:58:10.929770] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:09.189 [2024-12-09 13:58:10.929780] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.189 [2024-12-09 13:58:10.929796] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.189 [2024-12-09 13:58:10.929804] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.189 [2024-12-09 13:58:10.929814] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.189 13:58:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.189 13:58:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.189 13:58:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.189 13:58:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.449 [2024-12-09 13:58:11.159309] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:09.449 13:58:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.449 13:58:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:09.449 13:58:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.449 13:58:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.449 13:58:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.449 ************************************ 00:05:09.449 START TEST scheduler_create_thread 00:05:09.449 ************************************ 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.449 2 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.449 3 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.449 4 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.449 5
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.449 6
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.449 7
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.449 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.710 8
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.710 9
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.710 10
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.710 13:58:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.091 13:58:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.091 13:58:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:11.091 13:58:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:11.091 13:58:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.091 13:58:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:12.025 13:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:12.025 
00:05:12.025 real 0m2.617s
00:05:12.025 user 0m0.015s
00:05:12.025 sys 0m0.005s
00:05:12.025 13:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.025 13:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:12.025 ************************************
00:05:12.025 END TEST scheduler_create_thread
00:05:12.025 ************************************
00:05:12.283 13:58:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:12.283 13:58:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58335
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58335 ']'
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58335
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58335
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:12.283 killing process with pid 58335
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58335'
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58335
00:05:12.283 13:58:13 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58335
00:05:12.541 [2024-12-09 13:58:14.274010] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:13.112 
00:05:13.112 real 0m4.982s
00:05:13.112 user 0m8.777s
00:05:13.112 sys 0m0.333s
00:05:13.112 13:58:14 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.112 ************************************
00:05:13.112 END TEST event_scheduler
00:05:13.112 ************************************
00:05:13.112 13:58:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:13.373 13:58:14 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:13.373 13:58:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:13.373 13:58:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.373 13:58:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.373 13:58:14 event -- common/autotest_common.sh@10 -- # set +x
00:05:13.373 ************************************
00:05:13.373 START TEST app_repeat
00:05:13.373 ************************************
00:05:13.373 13:58:14 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58436
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:13.373 Process app_repeat pid: 58436
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58436'
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:13.373 spdk_app_start Round 0
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:13.373 13:58:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58436 /var/tmp/spdk-nbd.sock
00:05:13.373 13:58:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58436 ']'
00:05:13.373 13:58:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:13.373 13:58:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:13.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:58:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
13:58:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
13:58:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:13.373 [2024-12-09 13:58:14.985461] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization...
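For reference, the scheduler_create_thread trace above drives the scheduler test app entirely over JSON-RPC: rpc_cmd is a thin wrapper around scripts/rpc.py, and the scheduler_thread_create/scheduler_thread_set_active/scheduler_thread_delete methods come from the test-only scheduler_plugin module. A minimal sketch of the same call sequence, assuming the test app is already running and scheduler_plugin is importable by rpc.py (socket path and captured id are illustrative, not taken from this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"

    # Threads pinned to one core each (cpumask -m), always idle (-a 0)
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x2 -a 0

    # Unpinned thread that is busy ~30% of the time; the RPC prints the new id
    tid=$($rpc scheduler_thread_create -n one_third_active -a 30)

    # Raise its busy percentage to 50, then remove it again
    $rpc scheduler_thread_set_active "$tid" 50
    $rpc scheduler_thread_delete "$tid"

In the run above the captured ids were 11 and 12, and the surrounding [[ 0 == 0 ]] lines are the harness checking each call's exit status.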
00:05:13.373 [2024-12-09 13:58:14.985593] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58436 ]
00:05:13.373 [2024-12-09 13:58:15.144811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:13.634 [2024-12-09 13:58:15.246818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:13.634 [2024-12-09 13:58:15.246916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.201 13:58:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:14.201 13:58:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:14.201 13:58:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:14.462 Malloc0
00:05:14.462 13:58:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:14.724 Malloc1
00:05:14.724 13:58:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:14.724 13:58:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:14.986 /dev/nbd0
00:05:14.986 13:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:14.986 13:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:14.986 1+0 records in
00:05:14.986 1+0 records out
00:05:14.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404551 s, 10.1 MB/s
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:14.986 13:58:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:14.986 13:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:14.986 13:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:14.986 13:58:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:14.986 /dev/nbd1
00:05:15.248 13:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:15.248 13:58:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:15.248 1+0 records in
00:05:15.248 1+0 records out
00:05:15.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328928 s, 12.5 MB/s
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:15.248 13:58:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:15.248 13:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:15.248 13:58:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:15.248 13:58:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:15.248 13:58:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
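The waitfornbd helper traced above is what makes nbd_start_disks safe to script: it polls /proc/partitions until the kernel has actually registered the new device, then does a single O_DIRECT read and checks that it returned data. A condensed sketch of that logic as reconstructed from the xtrace (the sleep between retries and the /tmp scratch path are assumptions; the trace only shows the immediately-successful path with the repo-local scratch file):

    waitfornbd() {
        local nbd_name=$1 i size
        # wait for the device node to appear in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # read one 4 KiB block with O_DIRECT and confirm it is non-empty
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || true
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }

Both loops are bounded at 20 attempts, matching the (( i <= 20 )) guards in the trace.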
00:05:15.248 13:58:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:15.248 {
00:05:15.248 "nbd_device": "/dev/nbd0",
00:05:15.248 "bdev_name": "Malloc0"
00:05:15.248 },
00:05:15.248 {
00:05:15.248 "nbd_device": "/dev/nbd1",
00:05:15.248 "bdev_name": "Malloc1"
00:05:15.248 }
00:05:15.248 ]'
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:15.248 {
00:05:15.248 "nbd_device": "/dev/nbd0",
00:05:15.248 "bdev_name": "Malloc0"
00:05:15.248 },
00:05:15.248 {
00:05:15.248 "nbd_device": "/dev/nbd1",
00:05:15.248 "bdev_name": "Malloc1"
00:05:15.248 }
00:05:15.248 ]'
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:15.248 /dev/nbd1'
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:15.248 /dev/nbd1'
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:15.248 13:58:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:15.511 256+0 records in
00:05:15.511 256+0 records out
00:05:15.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626044 s, 167 MB/s
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:15.511 256+0 records in
00:05:15.511 256+0 records out
00:05:15.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015683 s, 66.9 MB/s
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:15.511 256+0 records in
00:05:15.511 256+0 records out
00:05:15.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248484 s, 42.2 MB/s
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:15.511 13:58:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.773 13:58:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:16.032 13:58:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:16.032 13:58:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:16.290 13:58:18 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:17.226 [2024-12-09 13:58:18.699156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:17.226 [2024-12-09 13:58:18.785860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:17.226 [2024-12-09 13:58:18.785861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.226 [2024-12-09 13:58:18.888234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:17.226 [2024-12-09 13:58:18.888302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:19.755 spdk_app_start Round 1
00:05:19.755 13:58:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:19.755 13:58:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:19.755 13:58:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58436 /var/tmp/spdk-nbd.sock
00:05:19.755 13:58:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58436 ']'
00:05:19.755 13:58:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:19.755 13:58:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:58:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
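Between rounds, nbd_get_count (traced above returning 2 with both disks attached and 0 after teardown) derives its answer from the nbd_get_disks RPC: it pulls every nbd_device field out of the returned JSON with jq and counts the /dev/nbd matches with grep -c. A sketch under the same assumptions as this run (socket path as shown; the || true mirrors the bare `true` in the trace, since grep -c exits non-zero when the list is empty):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nbd_disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"   # 2 with both disks attached, 0 after teardown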
00:05:19.755 13:58:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.755 13:58:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:19.755 13:58:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:19.755 13:58:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:19.755 13:58:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:19.755 Malloc0
00:05:19.755 13:58:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:20.013 Malloc1
00:05:20.013 13:58:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:20.013 13:58:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:20.271 /dev/nbd0
00:05:20.271 13:58:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:20.271 13:58:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:20.271 1+0 records in
00:05:20.271 1+0 records out
00:05:20.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185993 s, 22.0 MB/s
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:20.271 13:58:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:20.271 13:58:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:20.271 13:58:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:20.271 13:58:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:20.529 /dev/nbd1
00:05:20.529 13:58:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:20.529 13:58:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:20.529 1+0 records in
00:05:20.529 1+0 records out
00:05:20.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221923 s, 18.5 MB/s
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:20.529 13:58:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:20.529 13:58:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:20.529 13:58:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:20.529 13:58:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:20.529 13:58:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.529 13:58:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:20.787 {
00:05:20.787 "nbd_device": "/dev/nbd0",
00:05:20.787 "bdev_name": "Malloc0"
00:05:20.787 },
00:05:20.787 {
00:05:20.787 "nbd_device": "/dev/nbd1",
00:05:20.787 "bdev_name": "Malloc1"
00:05:20.787 }
00:05:20.787 ]'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:20.787 {
00:05:20.787 "nbd_device": "/dev/nbd0",
00:05:20.787 "bdev_name": "Malloc0"
00:05:20.787 },
00:05:20.787 {
00:05:20.787 "nbd_device": "/dev/nbd1",
00:05:20.787 "bdev_name": "Malloc1"
00:05:20.787 }
00:05:20.787 ]'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:20.787 /dev/nbd1'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:20.787 /dev/nbd1'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:20.787 256+0 records in
00:05:20.787 256+0 records out
00:05:20.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0074103 s, 142 MB/s
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:20.787 256+0 records in
00:05:20.787 256+0 records out
00:05:20.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156962 s, 66.8 MB/s
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:20.787 256+0 records in
00:05:20.787 256+0 records out
00:05:20.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165557 s, 63.3 MB/s
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:20.787 13:58:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:21.046 13:58:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.319 13:58:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:21.588 13:58:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:21.588 13:58:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:21.847 13:58:23 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:22.413 [2024-12-09 13:58:24.035652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:22.413 [2024-12-09 13:58:24.109308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:22.413 [2024-12-09 13:58:24.109397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.672 [2024-12-09 13:58:24.209534] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:22.672 [2024-12-09 13:58:24.209589] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:25.199 13:58:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:25.199 spdk_app_start Round 2
00:05:25.199 13:58:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:25.199 13:58:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58436 /var/tmp/spdk-nbd.sock
00:05:25.199 13:58:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58436 ']'
00:05:25.199 13:58:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:25.199 13:58:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:25.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:58:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
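The heart of each round is nbd_dd_data_verify, whose write and verify passes are both traced in full above. The write pass fills a 1 MiB scratch file from /dev/urandom and copies it to every NBD device with O_DIRECT; the verify pass then cmp's the first 1 MiB of each device back against the same file, so any corruption across the block-layer round trip fails the test. A condensed sketch of the two passes using the paths from this run:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # write: push the same 1 MiB of random data onto every device
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
    done

    # verify: every device must read back byte-identical contents
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M $tmp_file $i
    done
    rm $tmp_file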
00:05:25.199 13:58:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:25.199 13:58:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:25.199 13:58:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:25.199 13:58:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:25.199 13:58:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:25.199 Malloc0
00:05:25.199 13:58:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:25.457 Malloc1
00:05:25.457 13:58:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:25.457 13:58:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:25.715 /dev/nbd0
00:05:25.715 13:58:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:25.715 13:58:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:25.715 1+0 records in
00:05:25.715 1+0 records out
00:05:25.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272332 s, 15.0 MB/s
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:25.715 13:58:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:25.715 13:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:25.715 13:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:25.715 13:58:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:25.974 /dev/nbd1
00:05:25.974 13:58:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:25.974 13:58:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:25.974 1+0 records in
00:05:25.974 1+0 records out
00:05:25.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018301 s, 22.4 MB/s
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:25.974 13:58:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:25.974 13:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:25.974 13:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:25.974 13:58:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:25.974 13:58:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.974 13:58:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:26.233 {
00:05:26.233 "nbd_device": "/dev/nbd0",
00:05:26.233 "bdev_name": "Malloc0"
00:05:26.233 },
00:05:26.233 {
00:05:26.233 "nbd_device": "/dev/nbd1",
00:05:26.233 "bdev_name": "Malloc1"
00:05:26.233 }
00:05:26.233 ]'
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:26.233 {
00:05:26.233 "nbd_device": "/dev/nbd0",
00:05:26.233 "bdev_name": "Malloc0"
00:05:26.233 },
00:05:26.233 {
00:05:26.233 "nbd_device": "/dev/nbd1",
00:05:26.233 "bdev_name": "Malloc1"
00:05:26.233 }
00:05:26.233 ]'
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:26.233 /dev/nbd1'
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:26.233 /dev/nbd1'
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:26.233 256+0 records in
00:05:26.233 256+0 records out
00:05:26.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767841 s, 137 MB/s
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:26.233 13:58:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:26.234 256+0 records in
00:05:26.234 256+0 records out
00:05:26.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159414 s, 65.8 MB/s
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:26.234 256+0 records in
00:05:26.234 256+0 records out
00:05:26.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164804 s, 63.6 MB/s
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:26.234 13:58:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:26.492 13:58:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.750 13:58:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:27.008 13:58:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:27.008 13:58:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:27.266 13:58:28 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:27.831 [2024-12-09 13:58:29.439426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:27.831 [2024-12-09 13:58:29.509817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.831 [2024-12-09 13:58:29.509850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.831 [2024-12-09 13:58:29.614293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:27.831 [2024-12-09 13:58:29.614340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:30.360 13:58:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58436 /var/tmp/spdk-nbd.sock
00:05:30.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:58:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58436 ']'
00:05:30.360 13:58:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:30.360 13:58:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:30.360 13:58:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
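Teardown, traced just above for the final time, is the mirror image of setup: each device is detached with the nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the name disappears, and spdk_kill_instance SIGTERM stops the target app, which is why fresh reactor and notify startup notices follow each round. A sketch of that sequence (retry bound from the trace; the sleep between polls is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    for dev in /dev/nbd0 /dev/nbd1; do
        $rpc -s $sock nbd_stop_disk $dev
        name=$(basename $dev)
        # wait for the kernel to drop the device from /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done

    # ask the app to exit; the harness restarts it for the next round
    $rpc -s $sock spdk_kill_instance SIGTERM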
00:05:30.360 13:58:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:30.360 13:58:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:30.360 13:58:32 event.app_repeat -- event/event.sh@39 -- # killprocess 58436
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58436 ']'
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58436
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58436
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:30.360 killing process with pid 58436
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58436'
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58436
00:05:30.360 13:58:32 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58436
00:05:30.928 spdk_app_start is called in Round 0.
00:05:30.928 Shutdown signal received, stop current app iteration
00:05:30.928 Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 reinitialization...
00:05:30.928 spdk_app_start is called in Round 1.
00:05:30.928 Shutdown signal received, stop current app iteration
00:05:30.928 Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 reinitialization...
00:05:30.928 spdk_app_start is called in Round 2.
00:05:30.928 Shutdown signal received, stop current app iteration
00:05:30.928 Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 reinitialization...
00:05:30.928 spdk_app_start is called in Round 3.
00:05:30.928 Shutdown signal received, stop current app iteration
00:05:30.928 13:58:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:30.928 13:58:32 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:30.928 
00:05:30.928 real 0m17.700s
00:05:30.928 user 0m38.832s
00:05:30.928 sys 0m2.031s
00:05:30.928 13:58:32 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:30.928 13:58:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:30.928 ************************************
00:05:30.928 END TEST app_repeat
00:05:30.928 ************************************
00:05:30.928 13:58:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:30.928 13:58:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:30.928 13:58:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:30.928 13:58:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:30.928 13:58:32 event -- common/autotest_common.sh@10 -- # set +x
00:05:30.928 ************************************
00:05:30.928 START TEST cpu_locks
00:05:30.928 ************************************
00:05:30.928 13:58:32 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:31.187 * Looking for test storage...
00:05:31.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:31.187 13:58:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.187 --rc genhtml_branch_coverage=1
00:05:31.187 --rc genhtml_function_coverage=1
00:05:31.187 --rc genhtml_legend=1
00:05:31.187 --rc geninfo_all_blocks=1
00:05:31.187 --rc geninfo_unexecuted_blocks=1
00:05:31.187 
00:05:31.187 '
00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.187 --rc genhtml_branch_coverage=1
00:05:31.187 --rc genhtml_function_coverage=1
00:05:31.187 --rc genhtml_legend=1 00:05:31.187 --rc geninfo_all_blocks=1 00:05:31.187 --rc geninfo_unexecuted_blocks=1 00:05:31.187 00:05:31.187 ' 00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.187 --rc genhtml_branch_coverage=1 00:05:31.187 --rc genhtml_function_coverage=1 00:05:31.187 --rc genhtml_legend=1 00:05:31.187 --rc geninfo_all_blocks=1 00:05:31.187 --rc geninfo_unexecuted_blocks=1 00:05:31.187 00:05:31.187 ' 00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.187 --rc genhtml_branch_coverage=1 00:05:31.187 --rc genhtml_function_coverage=1 00:05:31.187 --rc genhtml_legend=1 00:05:31.187 --rc geninfo_all_blocks=1 00:05:31.187 --rc geninfo_unexecuted_blocks=1 00:05:31.187 00:05:31.187 ' 00:05:31.187 13:58:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.187 13:58:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.187 13:58:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.187 13:58:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.187 13:58:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.187 ************************************ 00:05:31.187 START TEST default_locks 00:05:31.187 ************************************ 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58872 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58872 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58872 ']' 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.187 13:58:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.187 [2024-12-09 13:58:32.922314] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
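The lcov probe above walks through scripts/common.sh's lt and cmp_versions, which split both version strings on '.', '-' and ':' and compare them field by field, padding the shorter one with zeros. A simplified sketch of that comparison (the real helper also validates each field through its decimal check, per the @353-@355 lines):

# Succeed if version $1 sorts strictly before version $2.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1    # equal versions are not lower
}
lt 1.15 2 && echo 'lcov 1.15 predates 2.x'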
00:05:31.187 [2024-12-09 13:58:32.922432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58872 ] 00:05:31.446 [2024-12-09 13:58:33.073028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.446 [2024-12-09 13:58:33.150561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.012 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.012 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:32.012 13:58:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58872 00:05:32.012 13:58:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58872 00:05:32.012 13:58:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58872 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58872 ']' 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58872 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58872 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.270 killing process with pid 58872 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58872' 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58872 00:05:32.270 13:58:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58872 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58872 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58872 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58872 00:05:33.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
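The NOT-wrapped waitforlisten in flight here is the interesting half of default_locks: the target was just killed, so the listen must fail, and NOT turns that expected failure into a pass. A condensed sketch of the wrapper (the real version, expanded at @652-@679 in the trace, also checks that its argument is runnable and treats exit codes above 128 separately):

# Invert a command's status: fail if it succeeds, succeed if it fails.
NOT() {
  if "$@"; then
    return 1
  fi
  return 0
}
NOT waitforlisten 58872 /var/tmp/spdk.sock && echo 'dead pid correctly refused'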
00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58872 ']' 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.646 ERROR: process (pid: 58872) is no longer running 00:05:33.646 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58872) - No such process 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.646 00:05:33.646 real 0m2.231s 00:05:33.646 user 0m2.210s 00:05:33.646 sys 0m0.399s 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.646 13:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.646 ************************************ 00:05:33.646 END TEST default_locks 00:05:33.646 ************************************ 00:05:33.646 13:58:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:33.646 13:58:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.646 13:58:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.646 13:58:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.646 ************************************ 00:05:33.646 START TEST default_locks_via_rpc 00:05:33.646 ************************************ 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58925 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58925 00:05:33.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58925 ']' 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.646 13:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.646 [2024-12-09 13:58:35.214683] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:33.646 [2024-12-09 13:58:35.215313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:05:33.646 [2024-12-09 13:58:35.370410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.904 [2024-12-09 13:58:35.449711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58925 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58925 00:05:34.470 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58925 
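Both lock assertions in this via_rpc pass reduce to one primitive, locks_exist, expanded at cpu_locks.sh@22 above: list the file locks held by the target pid and grep for the SPDK lock-file prefix. A sketch built straight from those two trace lines:

# True if pid $1 holds at least one /var/tmp/spdk_cpu_lock_* lock.
locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock
}
locks_exist 58925 && echo 'locks held again after framework_enable_cpumask_locks'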
00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58925 ']' 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58925 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58925 00:05:34.728 killing process with pid 58925 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58925' 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58925 00:05:34.728 13:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58925 00:05:36.099 ************************************ 00:05:36.099 END TEST default_locks_via_rpc 00:05:36.099 ************************************ 00:05:36.099 00:05:36.099 real 0m2.376s 00:05:36.099 user 0m2.383s 00:05:36.099 sys 0m0.467s 00:05:36.099 13:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.099 13:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.099 13:58:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:36.099 13:58:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.099 13:58:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.099 13:58:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.099 ************************************ 00:05:36.099 START TEST non_locking_app_on_locked_coremask 00:05:36.099 ************************************ 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:36.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58977 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58977 /var/tmp/spdk.sock 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58977 ']' 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
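killprocess, traced just above, does a little more than kill: it confirms the pid is still alive with kill -0, inspects what it is about to kill via ps, then waits so the lock files are released before the next suite starts. A trimmed sketch (the sudo re-kill branch visible at @964 is omitted here):

killprocess() {
  local pid=$1 process_name
  kill -0 "$pid" || return 1                     # already gone, nothing to do
  process_name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($process_name)"
  kill "$pid"
  wait "$pid" 2>/dev/null                        # reap it; core locks are now free
}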
00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.099 13:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.099 [2024-12-09 13:58:37.661299] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:36.099 [2024-12-09 13:58:37.661415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:05:36.099 [2024-12-09 13:58:37.815790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.357 [2024-12-09 13:58:37.896268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58993 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58993 /var/tmp/spdk2.sock 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58993 ']' 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.923 13:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.923 [2024-12-09 13:58:38.566275] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:36.923 [2024-12-09 13:58:38.566394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58993 ] 00:05:37.180 [2024-12-09 13:58:38.730210] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.180 [2024-12-09 13:58:38.730252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.180 [2024-12-09 13:58:38.892028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.137 13:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.137 13:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.137 13:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58977 00:05:38.137 13:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.137 13:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58977 00:05:38.394 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58977 00:05:38.394 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58977 ']' 00:05:38.394 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58977 00:05:38.394 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:38.394 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.394 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58977 00:05:38.394 killing process with pid 58977 00:05:38.394 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.395 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.395 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58977' 00:05:38.395 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58977 00:05:38.395 13:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58977 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58993 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58993 ']' 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58993 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58993 00:05:40.921 killing process with pid 58993 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58993' 00:05:40.921 13:58:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58993 00:05:40.921 13:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58993 00:05:42.291 ************************************ 00:05:42.291 END TEST non_locking_app_on_locked_coremask 00:05:42.291 ************************************ 00:05:42.291 00:05:42.291 real 0m6.169s 00:05:42.291 user 0m6.426s 00:05:42.291 sys 0m0.817s 00:05:42.291 13:58:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.291 13:58:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.291 13:58:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:42.291 13:58:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.291 13:58:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.291 13:58:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.291 ************************************ 00:05:42.291 START TEST locking_app_on_unlocked_coremask 00:05:42.291 ************************************ 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59084 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59084 /var/tmp/spdk.sock 00:05:42.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59084 ']' 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.291 13:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.291 [2024-12-09 13:58:43.874825] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:42.291 [2024-12-09 13:58:43.875037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:05:42.291 [2024-12-09 13:58:44.026549] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
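This suite inverts the experiment that just ended: now the first target (59084) opts out of locking with --disable-cpumask-locks and the second (59100) claims core 0 normally, so both must coexist. The launch pattern, sketched from the command lines in the trace (the explicit backgrounding is illustrative; the harness handles it):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$spdk_tgt" -m 0x1 --disable-cpumask-locks &        # first target takes no core lock
"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &         # second target claims core 0 itself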
00:05:42.291 [2024-12-09 13:58:44.026590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.549 [2024-12-09 13:58:44.110635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59100 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59100 /var/tmp/spdk2.sock 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59100 ']' 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.113 13:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.113 [2024-12-09 13:58:44.787828] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:05:43.113 [2024-12-09 13:58:44.788137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59100 ] 00:05:43.373 [2024-12-09 13:58:44.952094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.373 [2024-12-09 13:58:45.119234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.306 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.306 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.306 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59100 00:05:44.306 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59100 00:05:44.306 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59084 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59084 ']' 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59084 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59084 00:05:44.871 killing process with pid 59084 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59084' 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59084 00:05:44.871 13:58:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59084 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59100 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59100 ']' 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59100 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59100 00:05:47.395 killing process with pid 59100 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.395 13:58:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59100' 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59100 00:05:47.395 13:58:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59100 00:05:48.331 ************************************ 00:05:48.331 END TEST locking_app_on_unlocked_coremask 00:05:48.331 ************************************ 00:05:48.331 00:05:48.331 real 0m6.295s 00:05:48.331 user 0m6.580s 00:05:48.331 sys 0m0.802s 00:05:48.331 13:58:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.331 13:58:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.590 13:58:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:48.590 13:58:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.590 13:58:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.590 13:58:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.590 ************************************ 00:05:48.590 START TEST locking_app_on_locked_coremask 00:05:48.590 ************************************ 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59197 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59197 /var/tmp/spdk.sock 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59197 ']' 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.590 13:58:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.590 [2024-12-09 13:58:50.243334] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:05:48.590 [2024-12-09 13:58:50.243861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59197 ] 00:05:48.848 [2024-12-09 13:58:50.397056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.848 [2024-12-09 13:58:50.478758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59213 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59213 /var/tmp/spdk2.sock 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59213 /var/tmp/spdk2.sock 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:49.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59213 /var/tmp/spdk2.sock 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59213 ']' 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.414 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.672 [2024-12-09 13:58:51.244753] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
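Here the conflict is made real: pid 59197 holds the core 0 lock, and the second target just launched on /var/tmp/spdk2.sock runs without --disable-cpumask-locks, so it is expected to abort during startup. Roughly, with NOT and waitforlisten as sketched earlier:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!   # will die: core 0 already locked
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock         # the expected failure is the pass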
00:05:49.672 [2024-12-09 13:58:51.244871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59213 ] 00:05:49.672 [2024-12-09 13:58:51.409451] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59197 has claimed it. 00:05:49.672 [2024-12-09 13:58:51.409500] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.237 ERROR: process (pid: 59213) is no longer running 00:05:50.238 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59213) - No such process 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59197 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59197 00:05:50.238 13:58:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59197 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59197 ']' 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59197 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59197 00:05:50.496 killing process with pid 59197 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59197' 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59197 00:05:50.496 13:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59197 00:05:51.872 ************************************ 00:05:51.872 END TEST locking_app_on_locked_coremask 00:05:51.872 ************************************ 00:05:51.872 00:05:51.872 real 0m3.104s 00:05:51.872 user 0m3.430s 00:05:51.872 sys 0m0.522s 00:05:51.872 13:58:53 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.872 13:58:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.872 13:58:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:51.872 13:58:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.872 13:58:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.872 13:58:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.872 ************************************ 00:05:51.872 START TEST locking_overlapped_coremask 00:05:51.872 ************************************ 00:05:51.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59266 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59266 /var/tmp/spdk.sock 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59266 ']' 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:51.872 13:58:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.872 [2024-12-09 13:58:53.411867] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:05:51.873 [2024-12-09 13:58:53.412014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59266 ] 00:05:51.873 [2024-12-09 13:58:53.573016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.166 [2024-12-09 13:58:53.695718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.166 [2024-12-09 13:58:53.696027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.166 [2024-12-09 13:58:53.696107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59284 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59284 /var/tmp/spdk2.sock 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59284 /var/tmp/spdk2.sock 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59284 /var/tmp/spdk2.sock 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59284 ']' 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.733 13:58:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.733 [2024-12-09 13:58:54.407358] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
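The masks chosen above are the whole point of this suite: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two targets collide only on core 2. Sketched with the same helpers:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$spdk_tgt" -m 0x7 &                                  # locks cores 0, 1 and 2
"$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock & pid2=$!  # wants cores 2, 3 and 4
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock         # must fail on the shared core 2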
00:05:52.733 [2024-12-09 13:58:54.407601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59284 ] 00:05:52.990 [2024-12-09 13:58:54.580818] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59266 has claimed it. 00:05:52.990 [2024-12-09 13:58:54.580880] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.248 ERROR: process (pid: 59284) is no longer running 00:05:53.248 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59284) - No such process 00:05:53.248 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.248 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:53.248 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:53.248 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.248 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59266 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59266 ']' 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59266 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59266 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59266' 00:05:53.510 killing process with pid 59266 00:05:53.510 13:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59266 00:05:53.510 13:58:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59266 00:05:54.885 00:05:54.885 real 0m3.256s 00:05:54.885 user 0m8.761s 00:05:54.885 sys 0m0.479s 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.885 ************************************ 00:05:54.885 END TEST locking_overlapped_coremask 00:05:54.885 ************************************ 00:05:54.885 13:58:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:54.885 13:58:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.885 13:58:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.885 13:58:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.885 ************************************ 00:05:54.885 START TEST locking_overlapped_coremask_via_rpc 00:05:54.885 ************************************ 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59342 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59342 /var/tmp/spdk.sock 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59342 ']' 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.885 13:58:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.143 [2024-12-09 13:58:56.692836] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:55.143 [2024-12-09 13:58:56.693298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59342 ] 00:05:55.143 [2024-12-09 13:58:56.844554] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
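The check_remaining_locks step of the suite that just ended (cpu_locks.sh@36-@38 above) asserts that after the failed second instance, exactly the survivor's lock files are left. It is a plain glob comparison; a sketch:

# Expect exactly the lock files for cores 0-2 (mask 0x7) to remain.
check_remaining_locks() {
  local locks=(/var/tmp/spdk_cpu_lock_*)
  local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]
}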
00:05:55.143 [2024-12-09 13:58:56.844601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.403 [2024-12-09 13:58:56.949122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.403 [2024-12-09 13:58:56.949323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.403 [2024-12-09 13:58:56.949341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59360 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59360 /var/tmp/spdk2.sock 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59360 ']' 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.972 13:58:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.972 [2024-12-09 13:58:57.620771] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:55.972 [2024-12-09 13:58:57.620889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59360 ] 00:05:56.232 [2024-12-09 13:58:57.793763] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
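The two coremasks above overlap by construction: -m 0x7 spans cores 0-2 and -m 0x1c spans cores 2-4, so exactly one core is contested. The arithmetic, for the record:

```python
# Which core do the two targets fight over? Mask values taken from the log.
m1, m2 = 0x7, 0x1c                              # cores {0,1,2} vs {2,3,4}
contested = [c for c in range(8) if (m1 & m2) >> c & 1]
print(contested)                                # [2]
```

Both targets boot anyway because each is started with --disable-cpumask-locks; the conflict is deferred until the framework_enable_cpumask_locks RPC below.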
00:05:56.232 [2024-12-09 13:58:57.793815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.232 [2024-12-09 13:58:58.002727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.232 [2024-12-09 13:58:58.005751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.232 [2024-12-09 13:58:58.005756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.138 [2024-12-09 13:58:59.464705] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59342 has claimed it. 
00:05:58.138 request: 00:05:58.138 { 00:05:58.138 "method": "framework_enable_cpumask_locks", 00:05:58.138 "req_id": 1 00:05:58.138 } 00:05:58.138 Got JSON-RPC error response 00:05:58.138 response: 00:05:58.138 { 00:05:58.138 "code": -32603, 00:05:58.138 "message": "Failed to claim CPU core: 2" 00:05:58.138 } 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59342 /var/tmp/spdk.sock 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59342 ']' 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59360 /var/tmp/spdk2.sock 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59360 ']' 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
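The request/response pair above is a plain JSON-RPC exchange over the second target's Unix socket: enabling the locks on /var/tmp/spdk2.sock fails with -32603 because the first target already re-claimed core 2. A minimal client performing the same call, as a sketch (the production client is scripts/rpc.py; the read-until-it-parses framing here is a simplification):

```python
# Minimal JSON-RPC 2.0 call against an SPDK Unix-domain socket.
import json
import socket

def rpc(sock_path: str, method: str, params: dict | None = None) -> dict:
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf)  # parse once the response is complete
            except json.JSONDecodeError:
                continue                # partial read; keep receiving

resp = rpc("/var/tmp/spdk2.sock", "framework_enable_cpumask_locks")
if "error" in resp:
    # Expected here: code -32603, "Failed to claim CPU core: 2"
    print(resp["error"]["code"], resp["error"]["message"])
```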
00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.138 00:05:58.138 real 0m3.260s 00:05:58.138 user 0m1.081s 00:05:58.138 sys 0m0.119s 00:05:58.138 ************************************ 00:05:58.138 END TEST locking_overlapped_coremask_via_rpc 00:05:58.138 ************************************ 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.138 13:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.138 13:58:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.138 13:58:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59342 ]] 00:05:58.138 13:58:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59342 00:05:58.138 13:58:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59342 ']' 00:05:58.138 13:58:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59342 00:05:58.138 13:58:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:58.138 13:58:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.138 13:58:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59342 00:05:58.396 13:58:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.396 killing process with pid 59342 00:05:58.396 13:58:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.396 13:58:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59342' 00:05:58.396 13:58:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59342 00:05:58.396 13:58:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59342 00:05:59.330 13:59:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59360 ]] 00:05:59.588 13:59:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59360 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59360 ']' 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59360 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.588 
13:59:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59360 00:05:59.588 killing process with pid 59360 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59360' 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59360 00:05:59.588 13:59:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59360 00:06:00.993 13:59:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.993 Process with pid 59342 is not found 00:06:00.993 Process with pid 59360 is not found 00:06:00.993 13:59:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:00.993 13:59:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59342 ]] 00:06:00.993 13:59:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59342 00:06:00.994 13:59:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59342 ']' 00:06:00.994 13:59:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59342 00:06:00.994 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59342) - No such process 00:06:00.994 13:59:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59342 is not found' 00:06:00.994 13:59:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59360 ]] 00:06:00.994 13:59:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59360 00:06:00.994 13:59:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59360 ']' 00:06:00.994 13:59:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59360 00:06:00.994 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59360) - No such process 00:06:00.994 13:59:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59360 is not found' 00:06:00.994 13:59:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.994 00:06:00.994 real 0m29.992s 00:06:00.994 user 0m53.523s 00:06:00.994 sys 0m4.477s 00:06:00.994 13:59:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.994 ************************************ 00:06:00.994 END TEST cpu_locks 00:06:00.994 ************************************ 00:06:00.994 13:59:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.994 ************************************ 00:06:00.994 END TEST event 00:06:00.994 ************************************ 00:06:00.994 00:06:00.994 real 0m57.469s 00:06:00.994 user 1m48.087s 00:06:00.994 sys 0m7.286s 00:06:00.994 13:59:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.994 13:59:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.994 13:59:02 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:00.994 13:59:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.994 13:59:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.994 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.994 ************************************ 00:06:00.994 START TEST thread 00:06:00.994 ************************************ 00:06:00.994 13:59:02 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:01.254 * Looking for test storage... 
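The "No such process" lines during cleanup are expected, not failures: killprocess probes with kill -0 and treats an already-gone pid as success, after which cleanup removes the stale /var/tmp/spdk_cpu_lock_* files. The same probe in Python terms:

```python
# Liveness probe behind the "Process with pid N is not found" messages above.
import os

def process_gone(pid: int) -> bool:
    try:
        os.kill(pid, 0)              # signal 0: check existence, send nothing
    except ProcessLookupError:       # ESRCH, i.e. "No such process"
        return True
    except PermissionError:          # EPERM: exists, but owned by someone else
        return False
    return False

for pid in (59342, 59360):           # pids from the cleanup above
    if process_gone(pid):
        print(f"Process with pid {pid} is not found")
```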
00:06:01.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.254 13:59:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.254 13:59:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.254 13:59:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.254 13:59:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.254 13:59:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.254 13:59:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.254 13:59:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.254 13:59:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.254 13:59:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.254 13:59:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.254 13:59:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.254 13:59:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:01.254 13:59:02 thread -- scripts/common.sh@345 -- # : 1 00:06:01.254 13:59:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.254 13:59:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.254 13:59:02 thread -- scripts/common.sh@365 -- # decimal 1 00:06:01.254 13:59:02 thread -- scripts/common.sh@353 -- # local d=1 00:06:01.254 13:59:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.254 13:59:02 thread -- scripts/common.sh@355 -- # echo 1 00:06:01.254 13:59:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.254 13:59:02 thread -- scripts/common.sh@366 -- # decimal 2 00:06:01.254 13:59:02 thread -- scripts/common.sh@353 -- # local d=2 00:06:01.254 13:59:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.254 13:59:02 thread -- scripts/common.sh@355 -- # echo 2 00:06:01.254 13:59:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.254 13:59:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.254 13:59:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.254 13:59:02 thread -- scripts/common.sh@368 -- # return 0 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.254 --rc genhtml_branch_coverage=1 00:06:01.254 --rc genhtml_function_coverage=1 00:06:01.254 --rc genhtml_legend=1 00:06:01.254 --rc geninfo_all_blocks=1 00:06:01.254 --rc geninfo_unexecuted_blocks=1 00:06:01.254 00:06:01.254 ' 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.254 --rc genhtml_branch_coverage=1 00:06:01.254 --rc genhtml_function_coverage=1 00:06:01.254 --rc genhtml_legend=1 00:06:01.254 --rc geninfo_all_blocks=1 00:06:01.254 --rc geninfo_unexecuted_blocks=1 00:06:01.254 00:06:01.254 ' 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:01.254 --rc genhtml_branch_coverage=1 00:06:01.254 --rc genhtml_function_coverage=1 00:06:01.254 --rc genhtml_legend=1 00:06:01.254 --rc geninfo_all_blocks=1 00:06:01.254 --rc geninfo_unexecuted_blocks=1 00:06:01.254 00:06:01.254 ' 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.254 --rc genhtml_branch_coverage=1 00:06:01.254 --rc genhtml_function_coverage=1 00:06:01.254 --rc genhtml_legend=1 00:06:01.254 --rc geninfo_all_blocks=1 00:06:01.254 --rc geninfo_unexecuted_blocks=1 00:06:01.254 00:06:01.254 ' 00:06:01.254 13:59:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.254 13:59:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.254 ************************************ 00:06:01.254 START TEST thread_poller_perf 00:06:01.254 ************************************ 00:06:01.254 13:59:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.254 [2024-12-09 13:59:02.944454] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:01.254 [2024-12-09 13:59:02.944655] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59520 ] 00:06:01.515 [2024-12-09 13:59:03.092340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.515 [2024-12-09 13:59:03.218514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.515 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:02.898 [2024-12-09T13:59:04.692Z] ====================================== 00:06:02.898 [2024-12-09T13:59:04.692Z] busy:2611489776 (cyc) 00:06:02.898 [2024-12-09T13:59:04.692Z] total_run_count: 304000 00:06:02.898 [2024-12-09T13:59:04.692Z] tsc_hz: 2600000000 (cyc) 00:06:02.898 [2024-12-09T13:59:04.692Z] ====================================== 00:06:02.898 [2024-12-09T13:59:04.692Z] poller_cost: 8590 (cyc), 3303 (nsec) 00:06:02.898 00:06:02.898 ************************************ 00:06:02.898 END TEST thread_poller_perf 00:06:02.898 ************************************ 00:06:02.898 real 0m1.483s 00:06:02.898 user 0m1.304s 00:06:02.898 sys 0m0.068s 00:06:02.898 13:59:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.898 13:59:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.898 13:59:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.898 13:59:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.898 13:59:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.898 13:59:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.898 ************************************ 00:06:02.898 START TEST thread_poller_perf 00:06:02.898 ************************************ 00:06:02.898 13:59:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.898 [2024-12-09 13:59:04.493219] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:02.898 [2024-12-09 13:59:04.493499] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59557 ] 00:06:02.898 [2024-12-09 13:59:04.651303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.158 Running 1000 pollers for 1 seconds with 0 microseconds period. 
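The summary table is pure division over the reported counters, which makes the units worth spelling out: poller_cost is busy TSC cycles per poller invocation, and the nanosecond figure is that cost rescaled by tsc_hz. The same arithmetic also explains the second, zero-period run whose table follows: both runs burn roughly the same ~2.6 G busy cycles in their one-second window, so the busy pollers (invoked ~3.59 M times) come out about twelve times cheaper per call than the 1 µs timed pollers (invoked only 304 k times):

```python
# poller_cost derivation; all inputs copied from the two tables in this log.
tsc_hz = 2_600_000_000                       # cycles per second on this VM

runs = {
    "1 us period": (2_611_489_776, 304_000),     # busy cycles, run count
    "0 us period": (2_603_438_190, 3_594_000),   # second table, just below
}
for name, (busy_cyc, calls) in runs.items():
    cost_cyc = busy_cyc // calls                      # 8590 vs 724
    cost_ns = cost_cyc * 1_000_000_000 // tsc_hz      # 3303 vs 278
    print(f"{name}: poller_cost: {cost_cyc} (cyc), {cost_ns} (nsec)")
```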
00:06:03.158 [2024-12-09 13:59:04.769409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.544 [2024-12-09T13:59:06.338Z] ====================================== 00:06:04.544 [2024-12-09T13:59:06.338Z] busy:2603438190 (cyc) 00:06:04.544 [2024-12-09T13:59:06.338Z] total_run_count: 3594000 00:06:04.544 [2024-12-09T13:59:06.338Z] tsc_hz: 2600000000 (cyc) 00:06:04.544 [2024-12-09T13:59:06.338Z] ====================================== 00:06:04.544 [2024-12-09T13:59:06.338Z] poller_cost: 724 (cyc), 278 (nsec) 00:06:04.544 00:06:04.544 real 0m1.466s 00:06:04.544 user 0m1.294s 00:06:04.544 sys 0m0.063s 00:06:04.544 ************************************ 00:06:04.544 END TEST thread_poller_perf 00:06:04.544 ************************************ 00:06:04.544 13:59:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.544 13:59:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.544 13:59:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.544 ************************************ 00:06:04.544 END TEST thread 00:06:04.544 ************************************ 00:06:04.544 00:06:04.544 real 0m3.214s 00:06:04.544 user 0m2.711s 00:06:04.544 sys 0m0.249s 00:06:04.544 13:59:05 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.544 13:59:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.544 13:59:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:04.544 13:59:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:04.544 13:59:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.544 13:59:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.544 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:06:04.544 ************************************ 00:06:04.544 START TEST app_cmdline 00:06:04.544 ************************************ 00:06:04.544 13:59:06 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:04.544 * Looking for test storage... 
00:06:04.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.544 13:59:06 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.544 13:59:06 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.544 13:59:06 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.544 13:59:06 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:04.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
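The scripts/common.sh xtrace interleaved through this preamble (and repeated before each suite) is the lcov version gate: cmp_versions splits "1.15" and "2" on IFS=.-: into arrays and compares them field by numeric field. A compact Python equivalent of that comparison, as a sketch (purely numeric fields assumed, as in the log):

```python
# Field-wise numeric "lt 1.15 2" check, mirroring cmp_versions above.
import re

def version_lt(a: str, b: str) -> bool:
    pa = [int(x) for x in re.split(r"[.:-]", a)]   # IFS=.-: analogue
    pb = [int(x) for x in re.split(r"[.:-]", b)]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))                  # missing fields read as 0
    pb += [0] * (width - len(pb))
    return pa < pb

assert version_lt("1.15", "2")  # why the gate picks the branch-coverage LCOV_OPTS
```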
00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.544 13:59:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:04.544 13:59:06 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.544 13:59:06 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.544 --rc genhtml_branch_coverage=1 00:06:04.545 --rc genhtml_function_coverage=1 00:06:04.545 --rc genhtml_legend=1 00:06:04.545 --rc geninfo_all_blocks=1 00:06:04.545 --rc geninfo_unexecuted_blocks=1 00:06:04.545 00:06:04.545 ' 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.545 --rc genhtml_branch_coverage=1 00:06:04.545 --rc genhtml_function_coverage=1 00:06:04.545 --rc genhtml_legend=1 00:06:04.545 --rc geninfo_all_blocks=1 00:06:04.545 --rc geninfo_unexecuted_blocks=1 00:06:04.545 00:06:04.545 ' 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.545 --rc genhtml_branch_coverage=1 00:06:04.545 --rc genhtml_function_coverage=1 00:06:04.545 --rc genhtml_legend=1 00:06:04.545 --rc geninfo_all_blocks=1 00:06:04.545 --rc geninfo_unexecuted_blocks=1 00:06:04.545 00:06:04.545 ' 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.545 --rc genhtml_branch_coverage=1 00:06:04.545 --rc genhtml_function_coverage=1 00:06:04.545 --rc genhtml_legend=1 00:06:04.545 --rc geninfo_all_blocks=1 00:06:04.545 --rc geninfo_unexecuted_blocks=1 00:06:04.545 00:06:04.545 ' 00:06:04.545 13:59:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:04.545 13:59:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59640 00:06:04.545 13:59:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59640 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59640 ']' 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.545 13:59:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.545 13:59:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.545 [2024-12-09 13:59:06.279511] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
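This spdk_tgt instance is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, shrinking its RPC surface to exactly those two methods. What cmdline.sh goes on to verify, expressed with the rpc() helper sketched earlier (the helper and the default socket path are that sketch's assumptions, not part of this log):

```python
# Checks performed against the allow-listed target, per the transcript below:
# the method list is exactly the allowed pair, and anything else is refused.
methods = rpc("/var/tmp/spdk.sock", "rpc_get_methods")["result"]
assert sorted(methods) == ["rpc_get_methods", "spdk_get_version"]

blocked = rpc("/var/tmp/spdk.sock", "env_dpdk_get_mem_stats")
assert blocked["error"]["code"] == -32601   # JSON-RPC "Method not found"
```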
00:06:04.545 [2024-12-09 13:59:06.279895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59640 ] 00:06:04.803 [2024-12-09 13:59:06.444442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.803 [2024-12-09 13:59:06.577454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.740 13:59:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.740 13:59:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:05.740 13:59:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:05.740 { 00:06:05.740 "version": "SPDK v25.01-pre git sha1 3318278a6", 00:06:05.740 "fields": { 00:06:05.740 "major": 25, 00:06:05.740 "minor": 1, 00:06:05.740 "patch": 0, 00:06:05.740 "suffix": "-pre", 00:06:05.740 "commit": "3318278a6" 00:06:05.740 } 00:06:05.740 } 00:06:05.740 13:59:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.740 13:59:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.740 13:59:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.740 13:59:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.740 13:59:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.740 13:59:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.740 13:59:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.740 13:59:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.740 13:59:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.740 13:59:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.998 13:59:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.998 13:59:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.998 13:59:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.998 request: 00:06:05.998 { 00:06:05.998 "method": "env_dpdk_get_mem_stats", 00:06:05.998 "req_id": 1 00:06:05.998 } 00:06:05.998 Got JSON-RPC error response 00:06:05.998 response: 00:06:05.998 { 00:06:05.998 "code": -32601, 00:06:05.998 "message": "Method not found" 00:06:05.998 } 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.998 13:59:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59640 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59640 ']' 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59640 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.998 13:59:07 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59640 00:06:06.258 killing process with pid 59640 00:06:06.258 13:59:07 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.258 13:59:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.258 13:59:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59640' 00:06:06.258 13:59:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 59640 00:06:06.258 13:59:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 59640 00:06:08.170 00:06:08.170 real 0m3.461s 00:06:08.170 user 0m3.685s 00:06:08.170 sys 0m0.563s 00:06:08.170 13:59:09 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.170 ************************************ 00:06:08.170 END TEST app_cmdline 00:06:08.170 ************************************ 00:06:08.170 13:59:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.170 13:59:09 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:08.170 13:59:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.170 13:59:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.170 13:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:08.170 ************************************ 00:06:08.170 START TEST version 00:06:08.170 ************************************ 00:06:08.170 13:59:09 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:08.170 * Looking for test storage... 
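The version suite that starts here grep-parses SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and rebuilds the user-visible string, which must match what python3 reports for spdk.__version__. With major=25, minor=1, patch=0, suffix=-pre, both sides must land on 25.1rc0. Roughly, in Python (the real logic is the shell in test/app/version.sh; the "-pre" handling is inferred from the xtrace below):

```python
# Rebuilding "25.1rc0" from the header fields, the way version.sh does below.
major, minor, patch, suffix = 25, 1, 0, "-pre"   # values parsed in the log

version = f"{major}.{minor}"
if patch != 0:
    version += f".{patch}"        # skipped here, since patch == 0
if suffix == "-pre":
    version += "rc0"              # "-pre" trees advertise as release candidate 0

assert version == "25.1rc0"       # equals py_version from spdk.__version__
```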
00:06:08.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:08.170 13:59:09 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.170 13:59:09 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.170 13:59:09 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.170 13:59:09 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.170 13:59:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.170 13:59:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.170 13:59:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.170 13:59:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.170 13:59:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.170 13:59:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.170 13:59:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.170 13:59:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.170 13:59:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.170 13:59:09 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.170 13:59:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.170 13:59:09 version -- scripts/common.sh@344 -- # case "$op" in 00:06:08.170 13:59:09 version -- scripts/common.sh@345 -- # : 1 00:06:08.170 13:59:09 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.170 13:59:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.170 13:59:09 version -- scripts/common.sh@365 -- # decimal 1 00:06:08.170 13:59:09 version -- scripts/common.sh@353 -- # local d=1 00:06:08.170 13:59:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.170 13:59:09 version -- scripts/common.sh@355 -- # echo 1 00:06:08.170 13:59:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.170 13:59:09 version -- scripts/common.sh@366 -- # decimal 2 00:06:08.170 13:59:09 version -- scripts/common.sh@353 -- # local d=2 00:06:08.170 13:59:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.170 13:59:09 version -- scripts/common.sh@355 -- # echo 2 00:06:08.170 13:59:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.170 13:59:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.170 13:59:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.170 13:59:09 version -- scripts/common.sh@368 -- # return 0 00:06:08.171 13:59:09 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.171 13:59:09 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.171 --rc genhtml_branch_coverage=1 00:06:08.171 --rc genhtml_function_coverage=1 00:06:08.171 --rc genhtml_legend=1 00:06:08.171 --rc geninfo_all_blocks=1 00:06:08.171 --rc geninfo_unexecuted_blocks=1 00:06:08.171 00:06:08.171 ' 00:06:08.171 13:59:09 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.171 --rc genhtml_branch_coverage=1 00:06:08.171 --rc genhtml_function_coverage=1 00:06:08.171 --rc genhtml_legend=1 00:06:08.171 --rc geninfo_all_blocks=1 00:06:08.171 --rc geninfo_unexecuted_blocks=1 00:06:08.171 00:06:08.171 ' 00:06:08.171 13:59:09 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.171 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:08.171 --rc genhtml_branch_coverage=1 00:06:08.171 --rc genhtml_function_coverage=1 00:06:08.171 --rc genhtml_legend=1 00:06:08.171 --rc geninfo_all_blocks=1 00:06:08.171 --rc geninfo_unexecuted_blocks=1 00:06:08.171 00:06:08.171 ' 00:06:08.171 13:59:09 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.171 --rc genhtml_branch_coverage=1 00:06:08.171 --rc genhtml_function_coverage=1 00:06:08.171 --rc genhtml_legend=1 00:06:08.171 --rc geninfo_all_blocks=1 00:06:08.171 --rc geninfo_unexecuted_blocks=1 00:06:08.171 00:06:08.171 ' 00:06:08.171 13:59:09 version -- app/version.sh@17 -- # get_header_version major 00:06:08.171 13:59:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.171 13:59:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.171 13:59:09 version -- app/version.sh@14 -- # cut -f2 00:06:08.171 13:59:09 version -- app/version.sh@17 -- # major=25 00:06:08.171 13:59:09 version -- app/version.sh@18 -- # get_header_version minor 00:06:08.171 13:59:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.171 13:59:09 version -- app/version.sh@14 -- # cut -f2 00:06:08.171 13:59:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.171 13:59:09 version -- app/version.sh@18 -- # minor=1 00:06:08.171 13:59:09 version -- app/version.sh@19 -- # get_header_version patch 00:06:08.171 13:59:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.171 13:59:09 version -- app/version.sh@14 -- # cut -f2 00:06:08.171 13:59:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.171 13:59:09 version -- app/version.sh@19 -- # patch=0 00:06:08.171 13:59:09 version -- app/version.sh@20 -- # get_header_version suffix 00:06:08.171 13:59:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.171 13:59:09 version -- app/version.sh@14 -- # cut -f2 00:06:08.171 13:59:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.171 13:59:09 version -- app/version.sh@20 -- # suffix=-pre 00:06:08.171 13:59:09 version -- app/version.sh@22 -- # version=25.1 00:06:08.171 13:59:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:08.171 13:59:09 version -- app/version.sh@28 -- # version=25.1rc0 00:06:08.171 13:59:09 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:08.171 13:59:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.171 13:59:09 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:08.171 13:59:09 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:08.171 00:06:08.171 real 0m0.211s 00:06:08.171 user 0m0.132s 00:06:08.171 sys 0m0.106s 00:06:08.171 ************************************ 00:06:08.171 END TEST version 00:06:08.171 ************************************ 00:06:08.171 13:59:09 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.171 13:59:09 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.171 13:59:09 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:08.171 13:59:09 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:08.171 13:59:09 -- spdk/autotest.sh@194 -- # uname -s 00:06:08.171 13:59:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:08.171 13:59:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.171 13:59:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.171 13:59:09 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:08.171 13:59:09 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:08.171 13:59:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.171 13:59:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.171 13:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:08.171 ************************************ 00:06:08.171 START TEST blockdev_nvme 00:06:08.171 ************************************ 00:06:08.171 13:59:09 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:08.171 * Looking for test storage... 00:06:08.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:08.171 13:59:09 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.171 13:59:09 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.171 13:59:09 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.432 13:59:10 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.432 --rc genhtml_branch_coverage=1 00:06:08.432 --rc genhtml_function_coverage=1 00:06:08.432 --rc genhtml_legend=1 00:06:08.432 --rc geninfo_all_blocks=1 00:06:08.432 --rc geninfo_unexecuted_blocks=1 00:06:08.432 00:06:08.432 ' 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.432 --rc genhtml_branch_coverage=1 00:06:08.432 --rc genhtml_function_coverage=1 00:06:08.432 --rc genhtml_legend=1 00:06:08.432 --rc geninfo_all_blocks=1 00:06:08.432 --rc geninfo_unexecuted_blocks=1 00:06:08.432 00:06:08.432 ' 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.432 --rc genhtml_branch_coverage=1 00:06:08.432 --rc genhtml_function_coverage=1 00:06:08.432 --rc genhtml_legend=1 00:06:08.432 --rc geninfo_all_blocks=1 00:06:08.432 --rc geninfo_unexecuted_blocks=1 00:06:08.432 00:06:08.432 ' 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.432 --rc genhtml_branch_coverage=1 00:06:08.432 --rc genhtml_function_coverage=1 00:06:08.432 --rc genhtml_legend=1 00:06:08.432 --rc geninfo_all_blocks=1 00:06:08.432 --rc geninfo_unexecuted_blocks=1 00:06:08.432 00:06:08.432 ' 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:08.432 13:59:10 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59823 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:08.432 13:59:10 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59823 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59823 ']' 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.432 13:59:10 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.433 13:59:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.433 [2024-12-09 13:59:10.133481] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
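Once the target is up, setup_nvme_conf asks gen_nvme.sh for a bdev subsystem config and hands it to load_subsystem_config; the JSON visible in the rpc_cmd call below attaches the four emulated PCIe controllers. Its shape, reconstructed for readability (method name, bdev names and addresses are taken verbatim from that call):

```python
# Shape of the gen_nvme.sh output fed to load_subsystem_config below:
# one bdev_nvme_attach_controller entry per emulated NVMe controller.
import json

traddrs = ["0000:00:10.0", "0000:00:11.0", "0000:00:12.0", "0000:00:13.0"]
config = {
    "subsystem": "bdev",
    "config": [
        {
            "method": "bdev_nvme_attach_controller",
            "params": {"trtype": "PCIe", "name": f"Nvme{i}", "traddr": addr},
        }
        for i, addr in enumerate(traddrs)
    ],
}
print(json.dumps(config))
```

Each attach yields the Nvme<i>n<j> bdevs that bdev_get_bdevs dumps further down; Nvme2 exposes multiple namespaces, which is why Nvme2n1, Nvme2n2 and Nvme2n3 all appear.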
00:06:08.433 [2024-12-09 13:59:10.133891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59823 ] 00:06:08.693 [2024-12-09 13:59:10.294688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.693 [2024-12-09 13:59:10.437717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.634 13:59:11 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.634 13:59:11 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:09.634 13:59:11 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:09.634 13:59:11 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:09.634 13:59:11 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:09.634 13:59:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:09.634 13:59:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:09.634 13:59:11 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:09.634 13:59:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.634 13:59:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.896 13:59:11 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.896 13:59:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:09.896 13:59:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:09.897 13:59:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fb151ce6-cc33-4536-8409-45c8b1da134d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fb151ce6-cc33-4536-8409-45c8b1da134d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "dcd57675-f409-4b83-affa-6df75e1724d3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dcd57675-f409-4b83-affa-6df75e1724d3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3ecacacc-5d3e-42f7-b42d-6efe28c6c4be"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3ecacacc-5d3e-42f7-b42d-6efe28c6c4be",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "391fb1ea-b9bf-4c91-af43-fe749b6aa328"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "391fb1ea-b9bf-4c91-af43-fe749b6aa328",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8e26437d-f81d-41b5-9109-1acee0e21e46"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "8e26437d-f81d-41b5-9109-1acee0e21e46",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d044e132-395f-47c7-81ea-8a8d95ee1f5f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d044e132-395f-47c7-81ea-8a8d95ee1f5f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:09.897 13:59:11 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:09.897 13:59:11 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:09.897 13:59:11 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:09.897 13:59:11 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59823 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59823 ']' 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59823 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:09.897 13:59:11 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59823 00:06:09.897 killing process with pid 59823 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59823' 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59823 00:06:09.897 13:59:11 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59823 00:06:11.814 13:59:13 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:11.814 13:59:13 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:11.814 13:59:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:11.814 13:59:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.814 13:59:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 ************************************ 00:06:11.814 START TEST bdev_hello_world 00:06:11.814 ************************************ 00:06:11.814 13:59:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:11.814 [2024-12-09 13:59:13.471499] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:11.815 [2024-12-09 13:59:13.471668] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59907 ] 00:06:12.076 [2024-12-09 13:59:13.639936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.076 [2024-12-09 13:59:13.779232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.648 [2024-12-09 13:59:14.386011] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:12.648 [2024-12-09 13:59:14.386083] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:12.648 [2024-12-09 13:59:14.386112] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:12.648 [2024-12-09 13:59:14.388958] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:12.648 [2024-12-09 13:59:14.390339] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:12.648 [2024-12-09 13:59:14.390408] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:12.648 [2024-12-09 13:59:14.390894] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
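The bdev_hello_world phase logged around this point is a single run of the hello_bdev example binary against the generated bdev config; the exact invocation is visible in the trace. Reproduced by hand from an SPDK checkout it is roughly this (-b names the bdev to open):

    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # On success it opens Nvme0n1, writes a buffer, reads it back, and logs:
    #   Read string from bdev : Hello World!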
00:06:12.648 00:06:12.648 [2024-12-09 13:59:14.390921] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:13.593 00:06:13.593 real 0m1.808s 00:06:13.593 user 0m1.431s 00:06:13.593 sys 0m0.263s 00:06:13.593 ************************************ 00:06:13.593 END TEST bdev_hello_world 00:06:13.593 ************************************ 00:06:13.593 13:59:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.593 13:59:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:13.593 13:59:15 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:13.593 13:59:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.593 13:59:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.593 13:59:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.593 ************************************ 00:06:13.593 START TEST bdev_bounds 00:06:13.593 ************************************ 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:13.593 Process bdevio pid: 59949 00:06:13.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59949 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59949' 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59949 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59949 ']' 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.593 13:59:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:13.593 [2024-12-09 13:59:15.350330] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
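The bdev_bounds test starting here wraps the CUnit-based bdevio harness: bdevio claims the bdevs from the same JSON config and waits, then tests.py perform_tests triggers the suites whose output follows. A hand-run sketch of the same two steps (flags copied from the trace; -w makes bdevio wait to be driven over RPC, and -s 0 matches the PRE_RESERVED_MEM=0 set earlier):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # Once bdevio is listening, fire all registered suites over its RPC socket:
    test/bdev/bdevio/tests.py perform_tests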
00:06:13.593 [2024-12-09 13:59:15.350495] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59949 ] 00:06:13.855 [2024-12-09 13:59:15.514289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.117 [2024-12-09 13:59:15.657674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.117 [2024-12-09 13:59:15.658149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.117 [2024-12-09 13:59:15.657990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.690 13:59:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.690 13:59:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:14.690 13:59:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:14.690 I/O targets: 00:06:14.690 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:14.690 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:14.691 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.691 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.691 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.691 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:14.691 00:06:14.691 00:06:14.691 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.691 http://cunit.sourceforge.net/ 00:06:14.691 00:06:14.691 00:06:14.691 Suite: bdevio tests on: Nvme3n1 00:06:14.691 Test: blockdev write read block ...passed 00:06:14.691 Test: blockdev write zeroes read block ...passed 00:06:14.691 Test: blockdev write zeroes read no split ...passed 00:06:14.691 Test: blockdev write zeroes read split ...passed 00:06:14.691 Test: blockdev write zeroes read split partial ...passed 00:06:14.691 Test: blockdev reset ...[2024-12-09 13:59:16.454746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:14.691 [2024-12-09 13:59:16.460031] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:06:14.691 passed 00:06:14.691 Test: blockdev write read 8 blocks ...passed 00:06:14.691 Test: blockdev write read size > 128k ...passed 00:06:14.691 Test: blockdev write read invalid size ...passed 00:06:14.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.691 Test: blockdev write read max offset ...passed 00:06:14.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.691 Test: blockdev writev readv 8 blocks ...passed 00:06:14.691 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.691 Test: blockdev writev readv block ...passed 00:06:14.691 Test: blockdev writev readv size > 128k ...passed 00:06:14.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.975 Test: blockdev comparev and writev ...[2024-12-09 13:59:16.484206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b400a000 len:0x1000 00:06:14.975 [2024-12-09 13:59:16.484282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.975 passed 00:06:14.975 Test: blockdev nvme passthru rw ...passed 00:06:14.975 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.975 Test: blockdev nvme admin passthru ...[2024-12-09 13:59:16.487604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.975 [2024-12-09 13:59:16.487659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.975 passed 00:06:14.975 Test: blockdev copy ...passed 00:06:14.975 Suite: bdevio tests on: Nvme2n3 00:06:14.975 Test: blockdev write read block ...passed 00:06:14.975 Test: blockdev write zeroes read block ...passed 00:06:14.975 Test: blockdev write zeroes read no split ...passed 00:06:14.975 Test: blockdev write zeroes read split ...passed 00:06:14.975 Test: blockdev write zeroes read split partial ...passed 00:06:14.975 Test: blockdev reset ...[2024-12-09 13:59:16.586840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:14.975 [2024-12-09 13:59:16.591092] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:14.975 passed 00:06:14.975 Test: blockdev write read 8 blocks ...passed 00:06:14.975 Test: blockdev write read size > 128k ...passed 00:06:14.975 Test: blockdev write read invalid size ...passed 00:06:14.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.975 Test: blockdev write read max offset ...passed 00:06:14.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.975 Test: blockdev writev readv 8 blocks ...passed 00:06:14.975 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.975 Test: blockdev writev readv block ...passed 00:06:14.975 Test: blockdev writev readv size > 128k ...passed 00:06:14.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.975 Test: blockdev comparev and writev ...[2024-12-09 13:59:16.614670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x297206000 len:0x1000 00:06:14.975 [2024-12-09 13:59:16.614741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.975 passed 00:06:14.975 Test: blockdev nvme passthru rw ...passed 00:06:14.975 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.975 Test: blockdev nvme admin passthru ...[2024-12-09 13:59:16.617347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.975 [2024-12-09 13:59:16.617399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.975 passed 00:06:14.975 Test: blockdev copy ...passed 00:06:14.975 Suite: bdevio tests on: Nvme2n2 00:06:14.975 Test: blockdev write read block ...passed 00:06:14.975 Test: blockdev write zeroes read block ...passed 00:06:14.975 Test: blockdev write zeroes read no split ...passed 00:06:14.975 Test: blockdev write zeroes read split ...passed 00:06:14.975 Test: blockdev write zeroes read split partial ...passed 00:06:14.975 Test: blockdev reset ...[2024-12-09 13:59:16.689868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:14.975 [2024-12-09 13:59:16.696729] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:14.975 passed 00:06:14.975 Test: blockdev write read 8 blocks ...passed 00:06:14.975 Test: blockdev write read size > 128k ...passed 00:06:14.975 Test: blockdev write read invalid size ...passed 00:06:14.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.975 Test: blockdev write read max offset ...passed 00:06:14.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.975 Test: blockdev writev readv 8 blocks ...passed 00:06:14.975 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.975 Test: blockdev writev readv block ...passed 00:06:14.975 Test: blockdev writev readv size > 128k ...passed 00:06:14.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.975 Test: blockdev comparev and writev ...[2024-12-09 13:59:16.719834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1c3c000 len:0x1000 00:06:14.975 [2024-12-09 13:59:16.719903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.975 passed 00:06:14.975 Test: blockdev nvme passthru rw ...passed 00:06:14.975 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.975 Test: blockdev nvme admin passthru ...[2024-12-09 13:59:16.723024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.975 [2024-12-09 13:59:16.723203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.975 passed 00:06:14.975 Test: blockdev copy ...passed 00:06:14.975 Suite: bdevio tests on: Nvme2n1 00:06:14.975 Test: blockdev write read block ...passed 00:06:14.975 Test: blockdev write zeroes read block ...passed 00:06:14.975 Test: blockdev write zeroes read no split ...passed 00:06:15.279 Test: blockdev write zeroes read split ...passed 00:06:15.279 Test: blockdev write zeroes read split partial ...passed 00:06:15.279 Test: blockdev reset ...[2024-12-09 13:59:16.794602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:15.279 [2024-12-09 13:59:16.800758] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:15.279 passed 00:06:15.279 Test: blockdev write read 8 blocks ...passed 00:06:15.279 Test: blockdev write read size > 128k ...passed 00:06:15.279 Test: blockdev write read invalid size ...passed 00:06:15.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.279 Test: blockdev write read max offset ...passed 00:06:15.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.279 Test: blockdev writev readv 8 blocks ...passed 00:06:15.279 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.279 Test: blockdev writev readv block ...passed 00:06:15.279 Test: blockdev writev readv size > 128k ...passed 00:06:15.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.279 Test: blockdev comparev and writev ...[2024-12-09 13:59:16.824787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1c38000 len:0x1000 00:06:15.279 [2024-12-09 13:59:16.824995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.279 passed 00:06:15.279 Test: blockdev nvme passthru rw ...passed 00:06:15.279 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.279 Test: blockdev nvme admin passthru ...[2024-12-09 13:59:16.828220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.279 [2024-12-09 13:59:16.828269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.279 passed 00:06:15.279 Test: blockdev copy ...passed 00:06:15.279 Suite: bdevio tests on: Nvme1n1 00:06:15.279 Test: blockdev write read block ...passed 00:06:15.279 Test: blockdev write zeroes read block ...passed 00:06:15.279 Test: blockdev write zeroes read no split ...passed 00:06:15.279 Test: blockdev write zeroes read split ...passed 00:06:15.279 Test: blockdev write zeroes read split partial ...passed 00:06:15.279 Test: blockdev reset ...[2024-12-09 13:59:17.057146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:15.279 [2024-12-09 13:59:17.061789] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:15.279 passed 00:06:15.279 Test: blockdev write read 8 blocks ...passed 00:06:15.279 Test: blockdev write read size > 128k ...passed 00:06:15.279 Test: blockdev write read invalid size ...passed 00:06:15.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.279 Test: blockdev write read max offset ...passed 00:06:15.541 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.541 Test: blockdev writev readv 8 blocks ...passed 00:06:15.541 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.541 Test: blockdev writev readv block ...passed 00:06:15.541 Test: blockdev writev readv size > 128k ...passed 00:06:15.541 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.541 Test: blockdev comparev and writev ...[2024-12-09 13:59:17.086135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1c34000 len:0x1000 00:06:15.541 [2024-12-09 13:59:17.086212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.541 passed 00:06:15.541 Test: blockdev nvme passthru rw ...passed 00:06:15.541 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.541 Test: blockdev nvme admin passthru ...[2024-12-09 13:59:17.088915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.541 [2024-12-09 13:59:17.089118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.541 passed 00:06:15.541 Test: blockdev copy ...passed 00:06:15.541 Suite: bdevio tests on: Nvme0n1 00:06:15.541 Test: blockdev write read block ...passed 00:06:15.541 Test: blockdev write zeroes read block ...passed 00:06:15.541 Test: blockdev write zeroes read no split ...passed 00:06:15.541 Test: blockdev write zeroes read split ...passed 00:06:15.541 Test: blockdev write zeroes read split partial ...passed 00:06:15.541 Test: blockdev reset ...[2024-12-09 13:59:17.158450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:15.541 [2024-12-09 13:59:17.163391] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:06:15.541 passed 00:06:15.541 Test: blockdev write read 8 blocks ...passed 00:06:15.541 Test: blockdev write read size > 128k ...passed 00:06:15.541 Test: blockdev write read invalid size ...passed 00:06:15.541 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.541 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.541 Test: blockdev write read max offset ...passed 00:06:15.541 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.541 Test: blockdev writev readv 8 blocks ...passed 00:06:15.541 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.541 Test: blockdev writev readv block ...passed 00:06:15.541 Test: blockdev writev readv size > 128k ...passed 00:06:15.541 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.541 Test: blockdev comparev and writev ...[2024-12-09 13:59:17.184075] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:15.541 separate metadata which is not supported yet. 00:06:15.541 passed 00:06:15.541 Test: blockdev nvme passthru rw ...passed 00:06:15.541 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.541 Test: blockdev nvme admin passthru ...[2024-12-09 13:59:17.186044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:15.541 [2024-12-09 13:59:17.186110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:15.541 passed 00:06:15.541 Test: blockdev copy ...passed 00:06:15.541 00:06:15.541 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.541 suites 6 6 n/a 0 0 00:06:15.541 tests 138 138 138 0 0 00:06:15.541 asserts 893 893 893 0 n/a 00:06:15.541 00:06:15.541 Elapsed time = 1.876 seconds 00:06:15.541 0 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59949 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59949 ']' 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59949 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59949 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.541 killing process with pid 59949 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59949' 00:06:15.541 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59949 00:06:16.047 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59949 00:06:16.487 13:59:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:16.487 00:06:16.487 real 0m2.722s 00:06:16.487 user 0m6.694s 00:06:16.487 sys 0m0.370s 00:06:16.487 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.487 13:59:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:16.487 ************************************ 00:06:16.487 END TEST bdev_bounds
************************************ 00:06:16.487 13:59:18 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:16.487 13:59:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:16.487 13:59:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.487 13:59:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.487 ************************************ 00:06:16.487 START TEST bdev_nbd 00:06:16.487 ************************************ 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:16.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
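The bdev_nbd test that begins here exercises the kernel-facing path: bdev_svc hosts the bdevs on a private RPC socket (/var/tmp/spdk-nbd.sock), each bdev is exported as a /dev/nbdX block device via nbd_start_disk, and direct-I/O dd reads verify the export, as the xtrace below shows. A rough manual equivalent, assuming the nbd kernel module is loaded (device names are assigned by the RPC and may differ):

    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock --json test/bdev/bdev.json &
    # Export a bdev; the RPC prints the nbd device it picked, e.g. /dev/nbd0.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
    # Read one 4 KiB block through the kernel block layer, as the test does.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0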
00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60009 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60009 /var/tmp/spdk-nbd.sock 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60009 ']' 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:16.487 13:59:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:16.488 [2024-12-09 13:59:18.152700] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:16.488 [2024-12-09 13:59:18.153054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.749 [2024-12-09 13:59:18.318598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.749 [2024-12-09 13:59:18.457930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.323 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.323 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:17.323 13:59:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.324 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.586 1+0 records in 00:06:17.586 1+0 records out 00:06:17.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572154 s, 7.2 MB/s 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.586 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.848 1+0 records in 00:06:17.848 1+0 records out 00:06:17.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120319 s, 3.4 MB/s 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.848 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.109 1+0 records in 00:06:18.109 1+0 records out 00:06:18.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00144786 s, 2.8 MB/s 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.109 13:59:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.370 1+0 records in 00:06:18.370 1+0 records out 00:06:18.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000946486 s, 4.3 MB/s 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.370 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.631 1+0 records in 00:06:18.631 1+0 records out 00:06:18.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139481 s, 2.9 MB/s 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.631 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.893 1+0 records in 00:06:18.893 1+0 records out 00:06:18.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125939 s, 3.3 MB/s 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.893 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd0", 00:06:19.153 "bdev_name": "Nvme0n1" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd1", 00:06:19.153 "bdev_name": "Nvme1n1" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd2", 00:06:19.153 "bdev_name": "Nvme2n1" 00:06:19.153 }, 00:06:19.153 
{ 00:06:19.153 "nbd_device": "/dev/nbd3", 00:06:19.153 "bdev_name": "Nvme2n2" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd4", 00:06:19.153 "bdev_name": "Nvme2n3" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd5", 00:06:19.153 "bdev_name": "Nvme3n1" 00:06:19.153 } 00:06:19.153 ]' 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd0", 00:06:19.153 "bdev_name": "Nvme0n1" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd1", 00:06:19.153 "bdev_name": "Nvme1n1" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd2", 00:06:19.153 "bdev_name": "Nvme2n1" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd3", 00:06:19.153 "bdev_name": "Nvme2n2" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd4", 00:06:19.153 "bdev_name": "Nvme2n3" 00:06:19.153 }, 00:06:19.153 { 00:06:19.153 "nbd_device": "/dev/nbd5", 00:06:19.153 "bdev_name": "Nvme3n1" 00:06:19.153 } 00:06:19.153 ]' 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.153 13:59:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.415 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.676 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.938 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.200 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:20.462 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:20.462 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:20.462 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:20.462 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.462 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.462 13:59:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.462 13:59:22 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.462 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.724 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:20.986 /dev/nbd0 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.986 1+0 records in 00:06:20.986 1+0 records out 00:06:20.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00158109 s, 2.6 MB/s 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.986 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.248 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.248 13:59:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.248 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.248 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.248 13:59:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:21.248 /dev/nbd1 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.248 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.248 1+0 records in 00:06:21.248 1+0 records out 00:06:21.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111171 s, 3.7 MB/s 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:21.511 /dev/nbd10 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.511 1+0 records in 00:06:21.511 1+0 records out 00:06:21.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133145 s, 3.1 MB/s 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.511 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
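The readiness handshake traced around this point (autotest_common.sh@872-893) repeats for every nbd device: poll /proc/partitions until the kernel registers the name, then prove the device can actually serve I/O with a single O_DIRECT read. A minimal bash reconstruction of that waitfornbd helper, assuming a short sleep between retries (only the successful branches show up in an xtrace, so the delay is not visible in this log):

    # Sketch of waitfornbd as implied by the trace; the 0.1 s retry delay and
    # the /tmp scratch path are assumptions, not copied from the source.
    waitfornbd() {
        local nbd_name=$1
        local i
        # Wait for the kernel to list the device (autotest_common.sh@875-877).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # The node can exist before it is usable, so require one successful
        # direct read of a 4 KiB block (autotest_common.sh@888-889).
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2> /dev/null && break
            sleep 0.1
        done
        # Keep the read only if it produced data (@890-892), then clean up.
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

In the trace the scratch file lives at test/bdev/nbdtest inside the repo; /tmp is used here purely for illustration.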
00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:21.774 /dev/nbd11 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.774 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.774 1+0 records in 00:06:21.774 1+0 records out 00:06:21.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141262 s, 2.9 MB/s 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:22.037 /dev/nbd12 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.037 1+0 records in 00:06:22.037 1+0 records out 00:06:22.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955796 s, 4.3 MB/s 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.037 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.344 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.344 13:59:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.344 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.344 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:22.344 13:59:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:22.344 /dev/nbd13 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.344 1+0 records in 00:06:22.344 1+0 records out 00:06:22.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0018658 s, 2.2 MB/s 00:06:22.344 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.631 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.631 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.631 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.631 13:59:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.631 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.631 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 
-- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd0", 00:06:22.632 "bdev_name": "Nvme0n1" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd1", 00:06:22.632 "bdev_name": "Nvme1n1" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd10", 00:06:22.632 "bdev_name": "Nvme2n1" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd11", 00:06:22.632 "bdev_name": "Nvme2n2" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd12", 00:06:22.632 "bdev_name": "Nvme2n3" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd13", 00:06:22.632 "bdev_name": "Nvme3n1" 00:06:22.632 } 00:06:22.632 ]' 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd0", 00:06:22.632 "bdev_name": "Nvme0n1" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd1", 00:06:22.632 "bdev_name": "Nvme1n1" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd10", 00:06:22.632 "bdev_name": "Nvme2n1" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd11", 00:06:22.632 "bdev_name": "Nvme2n2" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd12", 00:06:22.632 "bdev_name": "Nvme2n3" 00:06:22.632 }, 00:06:22.632 { 00:06:22.632 "nbd_device": "/dev/nbd13", 00:06:22.632 "bdev_name": "Nvme3n1" 00:06:22.632 } 00:06:22.632 ]' 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.632 /dev/nbd1 00:06:22.632 /dev/nbd10 00:06:22.632 /dev/nbd11 00:06:22.632 /dev/nbd12 00:06:22.632 /dev/nbd13' 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.632 /dev/nbd1 00:06:22.632 /dev/nbd10 00:06:22.632 /dev/nbd11 00:06:22.632 /dev/nbd12 00:06:22.632 /dev/nbd13' 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:22.632 256+0 records in 00:06:22.632 256+0 records out 00:06:22.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638987 s, 164 MB/s 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.632 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.895 256+0 records in 00:06:22.895 256+0 records out 00:06:22.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.266333 s, 3.9 MB/s 00:06:22.895 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.895 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.157 256+0 records in 00:06:23.157 256+0 records out 00:06:23.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241951 s, 4.3 MB/s 00:06:23.157 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.157 13:59:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:23.419 256+0 records in 00:06:23.419 256+0 records out 00:06:23.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.299153 s, 3.5 MB/s 00:06:23.419 13:59:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.419 13:59:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:23.994 256+0 records in 00:06:23.994 256+0 records out 00:06:23.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.291169 s, 3.6 MB/s 00:06:23.994 13:59:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.994 13:59:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:23.994 256+0 records in 00:06:23.994 256+0 records out 00:06:23.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.283775 s, 3.7 MB/s 00:06:23.994 13:59:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.994 13:59:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:24.568 256+0 records in 00:06:24.568 256+0 records out 00:06:24.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.28626 s, 3.7 MB/s 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.568 
13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.568 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.830 13:59:26 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.830 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.089 13:59:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.351 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd12 /proc/partitions 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.613 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.875 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:26.138 13:59:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:26.400 malloc_lvol_verify 00:06:26.400 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:26.662 67c034ae-ac9e-4009-ab85-e807bc5ced4f 00:06:26.662 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:26.923 a48ac5e7-e793-4b6d-97d6-46a9aec7727c 00:06:26.923 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:26.923 /dev/nbd0 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:27.186 mke2fs 1.47.0 (5-Feb-2023) 00:06:27.186 Discarding device blocks: 0/4096 done 00:06:27.186 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:27.186 00:06:27.186 Allocating group tables: 0/1 done 00:06:27.186 Writing inode tables: 0/1 done 00:06:27.186 Creating journal (1024 blocks): done 00:06:27.186 Writing superblocks and filesystem accounting information: 0/1 done 00:06:27.186 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.186 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60009 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60009 ']' 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60009 00:06:27.448 13:59:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:27.448 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.448 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60009 00:06:27.448 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.448 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 
-- # '[' reactor_0 = sudo ']' 00:06:27.448 killing process with pid 60009 00:06:27.448 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60009' 00:06:27.448 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60009 00:06:27.448 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60009 00:06:28.397 13:59:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:28.397 00:06:28.397 real 0m11.808s 00:06:28.397 user 0m15.896s 00:06:28.397 sys 0m3.888s 00:06:28.397 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.397 ************************************ 00:06:28.397 END TEST bdev_nbd 00:06:28.397 ************************************ 00:06:28.397 13:59:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:28.397 skipping fio tests on NVMe due to multi-ns failures. 00:06:28.397 13:59:29 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:28.397 13:59:29 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:28.397 13:59:29 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:28.397 13:59:29 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:28.397 13:59:29 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:28.397 13:59:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:28.397 13:59:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.397 13:59:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:28.397 ************************************ 00:06:28.397 START TEST bdev_verify 00:06:28.397 ************************************ 00:06:28.397 13:59:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:28.397 [2024-12-09 13:59:30.029379] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:28.397 [2024-12-09 13:59:30.029611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60408 ] 00:06:28.658 [2024-12-09 13:59:30.195112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.658 [2024-12-09 13:59:30.334604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.658 [2024-12-09 13:59:30.334621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.231 Running I/O for 5 seconds... 
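The five-second run that starts here was launched with the bdevperf command recorded at blockdev.sh@814 just above. Lifted out of the run_test harness, it reduces to the sketch below; the paths are the ones on this CI VM, and the flag comments describe the behavior visible in the result table (with -C and -m 0x3, every bdev gets one verify job per reactor core):

    # Hand-run equivalent of the traced bdev_verify workload.
    # -q 128     queue depth per job
    # -o 4096    I/O size in bytes
    # -w verify  write pattern data, read it back, and compare
    # -t 5       run time in seconds
    # -C         let every selected core submit I/O to every bdev
    # -m 0x3     core mask: reactors on cores 0 and 1
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/examples/bdevperf" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3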
00:06:31.574 17344.00 IOPS, 67.75 MiB/s
[2024-12-09T13:59:34.311Z] 17984.00 IOPS, 70.25 MiB/s
[2024-12-09T13:59:35.253Z] 17941.33 IOPS, 70.08 MiB/s
[2024-12-09T13:59:36.199Z] 18224.00 IOPS, 71.19 MiB/s
[2024-12-09T13:59:36.199Z] 18188.80 IOPS, 71.05 MiB/s
00:06:34.405 Latency(us)
00:06:34.405 [2024-12-09T13:59:36.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:34.405 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x0 length 0xbd0bd
00:06:34.405 Nvme0n1 : 5.08 1486.98 5.81 0.00 0.00 85907.28 19862.45 104857.60
00:06:34.405 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:34.405 Nvme0n1 : 5.07 1514.81 5.92 0.00 0.00 84320.20 18955.03 82272.89
00:06:34.405 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x0 length 0xa0000
00:06:34.405 Nvme1n1 : 5.08 1486.59 5.81 0.00 0.00 85841.12 18551.73 100421.32
00:06:34.405 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0xa0000 length 0xa0000
00:06:34.405 Nvme1n1 : 5.07 1514.35 5.92 0.00 0.00 83988.43 19156.68 70173.93
00:06:34.405 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x0 length 0x80000
00:06:34.405 Nvme2n1 : 5.08 1485.65 5.80 0.00 0.00 85602.12 16837.71 83886.08
00:06:34.405 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x80000 length 0x80000
00:06:34.405 Nvme2n1 : 5.07 1513.89 5.91 0.00 0.00 83799.23 17543.48 68560.74
00:06:34.405 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x0 length 0x80000
00:06:34.405 Nvme2n2 : 5.08 1485.25 5.80 0.00 0.00 85417.02 15022.87 79046.50
00:06:34.405 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x80000 length 0x80000
00:06:34.405 Nvme2n2 : 5.07 1513.45 5.91 0.00 0.00 83715.36 17543.48 68157.44
00:06:34.405 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x0 length 0x80000
00:06:34.405 Nvme2n3 : 5.09 1484.82 5.80 0.00 0.00 85049.73 14619.57 70980.53
00:06:34.405 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x80000 length 0x80000
00:06:34.405 Nvme2n3 : 5.08 1513.00 5.91 0.00 0.00 83584.43 17442.66 72190.42
00:06:34.405 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x0 length 0x20000
00:06:34.405 Nvme3n1 : 5.09 1484.41 5.80 0.00 0.00 84821.82 14720.39 79046.50
00:06:34.405 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:34.405 Verification LBA range: start 0x20000 length 0x20000
00:06:34.405 Nvme3n1 : 5.08 1512.54 5.91 0.00 0.00 83471.21 12048.54 75013.51
00:06:34.405 [2024-12-09T13:59:36.199Z] ===================================================================================================================
00:06:34.405 [2024-12-09T13:59:36.199Z] Total : 17995.75 70.30 0.00 0.00 84619.66 12048.54 104857.60
00:06:35.793
00:06:35.793 real 0m7.322s
00:06:35.793 user 0m13.494s
00:06:35.793 sys 0m0.321s
13:59:37 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.793 ************************************ 00:06:35.793 END TEST bdev_verify 00:06:35.793 ************************************ 00:06:35.793 13:59:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:35.793 13:59:37 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:35.793 13:59:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:35.793 13:59:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.793 13:59:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.793 ************************************ 00:06:35.793 START TEST bdev_verify_big_io 00:06:35.793 ************************************ 00:06:35.793 13:59:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:35.793 [2024-12-09 13:59:37.427103] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:35.793 [2024-12-09 13:59:37.427256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60503 ] 00:06:36.056 [2024-12-09 13:59:37.588968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.056 [2024-12-09 13:59:37.730424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.056 [2024-12-09 13:59:37.730587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.997 Running I/O for 5 seconds... 
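A useful cross-check when reading these bdevperf tables: the MiB/s column is just IOPS multiplied by the I/O size. For the verify totals above (17995.75 IOPS at 4096 bytes):

    # IOPS x I/O size, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 17995.75 * 4096 / (1024 * 1024) }'
    # prints 70.30 MiB/s, matching the Total row of the verify table

The big-I/O pass starting here uses -o 65536 instead of -o 4096, which is why the much lower IOPS figures below still amount to a higher aggregate throughput.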
00:06:40.993 0.00 IOPS, 0.00 MiB/s
[2024-12-09T13:59:44.710Z] 1335.00 IOPS, 83.44 MiB/s
[2024-12-09T13:59:44.710Z] 1918.67 IOPS, 119.92 MiB/s
00:06:42.916 Latency(us)
00:06:42.916 [2024-12-09T13:59:44.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:42.916 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x0 length 0xbd0b
00:06:42.916 Nvme0n1 : 5.68 112.63 7.04 0.00 0.00 1093862.48 29239.14 1025991.29
00:06:42.916 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:42.916 Nvme0n1 : 5.76 116.00 7.25 0.00 0.00 1056357.76 23391.31 1038896.84
00:06:42.916 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x0 length 0xa000
00:06:42.916 Nvme1n1 : 5.77 115.55 7.22 0.00 0.00 1033858.02 87112.47 942105.21
00:06:42.916 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0xa000 length 0xa000
00:06:42.916 Nvme1n1 : 5.82 121.05 7.57 0.00 0.00 995706.38 54041.99 871124.68
00:06:42.916 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x0 length 0x8000
00:06:42.916 Nvme2n1 : 5.81 121.09 7.57 0.00 0.00 970415.73 39119.95 942105.21
00:06:42.916 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x8000 length 0x8000
00:06:42.916 Nvme2n1 : 5.82 121.01 7.56 0.00 0.00 962346.03 54848.59 896935.78
00:06:42.916 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x0 length 0x8000
00:06:42.916 Nvme2n2 : 5.82 121.03 7.56 0.00 0.00 939896.23 40531.50 948557.98
00:06:42.916 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x8000 length 0x8000
00:06:42.916 Nvme2n2 : 5.92 125.28 7.83 0.00 0.00 897436.91 32667.18 916294.10
00:06:42.916 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x0 length 0x8000
00:06:42.916 Nvme2n3 : 5.92 125.27 7.83 0.00 0.00 873773.84 35490.26 955010.76
00:06:42.916 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x8000 length 0x8000
00:06:42.916 Nvme2n3 : 5.93 129.50 8.09 0.00 0.00 845849.99 64527.75 935652.43
00:06:42.916 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:42.916 Verification LBA range: start 0x0 length 0x2000
00:06:42.916 Nvme3n1 : 5.94 140.13 8.76 0.00 0.00 767286.60 1046.06 980821.86
00:06:42.917 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:42.917 Verification LBA range: start 0x2000 length 0x2000
00:06:42.917 Nvme3n1 : 5.96 146.57 9.16 0.00 0.00 728873.95 3705.30 961463.53
00:06:42.917 [2024-12-09T13:59:44.711Z] ===================================================================================================================
00:06:42.917 [2024-12-09T13:59:44.711Z] Total : 1495.11 93.44 0.00 0.00 921008.14 1046.06 1038896.84
00:06:44.304
00:06:44.304 real 0m8.517s
00:06:44.304 user 0m15.923s
00:06:44.304 sys 0m0.331s
13:59:45 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:44.304 ************************************
00:06:44.304 END TEST bdev_verify_big_io
00:06:44.304 ************************************
00:06:44.304 13:59:45 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:44.304 13:59:45 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:44.304 13:59:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:44.304 13:59:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:44.304 13:59:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:44.304 ************************************
00:06:44.304 START TEST bdev_write_zeroes
00:06:44.304 ************************************
00:06:44.304 13:59:45 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:44.304 [2024-12-09 13:59:46.002616] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization...
00:06:44.304 [2024-12-09 13:59:46.002736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60612 ]
00:06:44.563 [2024-12-09 13:59:46.158086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.564 [2024-12-09 13:59:46.263500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.135 Running I/O for 1 seconds...
00:06:46.369 50688.00 IOPS, 198.00 MiB/s
00:06:46.369 Latency(us)
00:06:46.369 [2024-12-09T13:59:48.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:46.369 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:46.369 Nvme0n1 : 1.02 8479.87 33.12 0.00 0.00 15064.63 5595.77 26819.35
00:06:46.369 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:46.369 Nvme1n1 : 1.02 8470.16 33.09 0.00 0.00 15063.46 10687.41 22685.54
00:06:46.369 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:46.369 Nvme2n1 : 1.02 8460.47 33.05 0.00 0.00 14996.30 7612.26 21173.17
00:06:46.369 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:46.369 Nvme2n2 : 1.02 8450.81 33.01 0.00 0.00 14987.72 9376.69 20870.70
00:06:46.369 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:46.369 Nvme2n3 : 1.02 8441.20 32.97 0.00 0.00 14981.02 9124.63 22383.06
00:06:46.369 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:46.369 Nvme3n1 : 1.02 8431.56 32.94 0.00 0.00 14973.73 8116.38 23492.14
00:06:46.369 [2024-12-09T13:59:48.163Z] ===================================================================================================================
00:06:46.369 [2024-12-09T13:59:48.163Z] Total : 50734.07 198.18 0.00 0.00 15011.14 5595.77 26819.35
00:06:46.940
00:06:46.940 real 0m2.681s
00:06:46.940 user 0m2.393s
00:06:46.940 sys 0m0.170s
00:06:46.940 ************************************
00:06:46.940 END TEST bdev_write_zeroes
00:06:46.940 ************************************
00:06:46.940 13:59:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- #
xtrace_disable 00:06:46.940 13:59:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:46.940 13:59:48 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:46.940 13:59:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:46.940 13:59:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.940 13:59:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.940 ************************************ 00:06:46.940 START TEST bdev_json_nonenclosed 00:06:46.940 ************************************ 00:06:46.940 13:59:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:47.201 [2024-12-09 13:59:48.750557] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:47.201 [2024-12-09 13:59:48.750677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60667 ] 00:06:47.201 [2024-12-09 13:59:48.907710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.462 [2024-12-09 13:59:49.009713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.462 [2024-12-09 13:59:49.009801] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:47.462 [2024-12-09 13:59:49.009818] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:47.462 [2024-12-09 13:59:49.009827] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.462 00:06:47.462 real 0m0.505s 00:06:47.462 user 0m0.300s 00:06:47.462 sys 0m0.101s 00:06:47.462 13:59:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.462 ************************************ 00:06:47.462 END TEST bdev_json_nonenclosed 00:06:47.462 ************************************ 00:06:47.462 13:59:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:47.462 13:59:49 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:47.462 13:59:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:47.462 13:59:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.462 13:59:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:47.462 ************************************ 00:06:47.462 START TEST bdev_json_nonarray 00:06:47.462 ************************************ 00:06:47.462 13:59:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:47.724 [2024-12-09 13:59:49.316061] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:06:47.724 [2024-12-09 13:59:49.316176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60687 ] 00:06:47.724 [2024-12-09 13:59:49.477999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.985 [2024-12-09 13:59:49.582698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.985 [2024-12-09 13:59:49.582788] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:47.985 [2024-12-09 13:59:49.582805] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:47.985 [2024-12-09 13:59:49.582815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.985 00:06:47.985 real 0m0.523s 00:06:47.985 user 0m0.330s 00:06:47.985 sys 0m0.088s 00:06:47.985 13:59:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.985 13:59:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:47.985 ************************************ 00:06:47.985 END TEST bdev_json_nonarray 00:06:47.985 ************************************ 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:48.246 13:59:49 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:48.246 00:06:48.246 real 0m39.986s 00:06:48.246 user 0m59.937s 00:06:48.246 sys 0m6.492s 00:06:48.246 13:59:49 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.246 ************************************ 00:06:48.246 END TEST blockdev_nvme 00:06:48.246 ************************************ 00:06:48.246 13:59:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.246 13:59:49 -- spdk/autotest.sh@209 -- # uname -s 00:06:48.246 13:59:49 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:48.246 13:59:49 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:48.246 13:59:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.246 13:59:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.246 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:06:48.246 ************************************ 00:06:48.246 START TEST blockdev_nvme_gpt 00:06:48.246 ************************************ 00:06:48.246 13:59:49 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:48.246 * Looking for test storage... 
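Note: the two bdev_json_* tests that just finished are deliberate negative tests of SPDK's JSON config loader. bdevperf is pointed first at a config that is not enclosed in a JSON object (nonenclosed.json) and then at one whose "subsystems" key is not an array (nonarray.json), and each run is expected to fail with the json_config.c *ERROR* line quoted above. The fixture contents themselves are not shown in this log; the following is only a plausible reconstruction from the two error messages, not the real files:

    # Hypothetical reconstructions -- the real fixtures live under test/bdev/
    # in the SPDK repo and may differ in detail.
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    cat > nonarray.json <<'EOF'
    { "subsystems": { "method": "bdev_nvme_attach_controller" } }
    EOF
    # Either file should make bdevperf abort during config load with the
    # corresponding *ERROR* line and a non-zero exit code, which run_test
    # treats as the test passing:
    build/examples/bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1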
00:06:48.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:48.246 13:59:49 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.246 13:59:49 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.246 13:59:49 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.246 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.246 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.507 13:59:50 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:48.507 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.507 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.507 --rc genhtml_branch_coverage=1 00:06:48.507 --rc genhtml_function_coverage=1 00:06:48.507 --rc genhtml_legend=1 00:06:48.507 --rc geninfo_all_blocks=1 00:06:48.507 --rc geninfo_unexecuted_blocks=1 00:06:48.507 00:06:48.507 ' 00:06:48.507 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.507 --rc 
genhtml_branch_coverage=1 00:06:48.507 --rc genhtml_function_coverage=1 00:06:48.507 --rc genhtml_legend=1 00:06:48.507 --rc geninfo_all_blocks=1 00:06:48.507 --rc geninfo_unexecuted_blocks=1 00:06:48.507 00:06:48.507 ' 00:06:48.507 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.507 --rc genhtml_branch_coverage=1 00:06:48.507 --rc genhtml_function_coverage=1 00:06:48.507 --rc genhtml_legend=1 00:06:48.507 --rc geninfo_all_blocks=1 00:06:48.507 --rc geninfo_unexecuted_blocks=1 00:06:48.507 00:06:48.507 ' 00:06:48.507 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.507 --rc genhtml_branch_coverage=1 00:06:48.507 --rc genhtml_function_coverage=1 00:06:48.507 --rc genhtml_legend=1 00:06:48.507 --rc geninfo_all_blocks=1 00:06:48.508 --rc geninfo_unexecuted_blocks=1 00:06:48.508 00:06:48.508 ' 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60771 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60771 
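At this point blockdev.sh has dispatched on test_type=gpt and starts a long-lived spdk_tgt process (pid 60771 in this run) that the rest of the GPT setup drives over the RPC socket. The start/wait pattern visible in the xtrace is roughly the following; this is a paraphrased sketch, since the helper bodies are only partially traced in this log, and $rootdir stands in for /home/vagrant/spdk_repo/spdk:

    # Paraphrase of start_spdk_tgt + waitforlisten as traced above.
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # waitforlisten polls until the target answers on its RPC socket
    # (default /var/tmp/spdk.sock) or the process dies:
    while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        kill -0 "$spdk_tgt_pid" || exit 1
        sleep 0.1
    done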
00:06:48.508 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60771 ']' 00:06:48.508 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.508 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.508 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.508 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.508 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:48.508 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.508 [2024-12-09 13:59:50.129858] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:48.508 [2024-12-09 13:59:50.129971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60771 ] 00:06:48.508 [2024-12-09 13:59:50.287037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.768 [2024-12-09 13:59:50.389703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.339 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.339 13:59:50 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:49.339 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:49.339 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:49.339 13:59:50 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:49.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:49.861 Waiting for block devices as requested 00:06:49.861 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:49.861 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:49.861 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:50.122 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:55.513 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.513 13:59:56 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:55.513 BYT; 00:06:55.513 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:55.513 BYT; 00:06:55.513 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:55.513 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:55.514 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:55.514 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:55.514 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:55.514 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:55.514 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:55.514 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:55.514 13:59:56 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:55.514 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:55.514 13:59:56 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:56.456 The operation has completed successfully. 00:06:56.456 13:59:57 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:57.391 The operation has completed successfully. 00:06:57.391 13:59:58 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:57.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:58.216 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.216 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.216 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.216 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.216 13:59:59 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:58.216 13:59:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.216 13:59:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.216 [] 00:06:58.216 13:59:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.216 13:59:59 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:58.216 13:59:59 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:58.216 13:59:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:58.216 13:59:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:58.216 13:59:59 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:58.216 13:59:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.216 13:59:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.473 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.473 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:58.473 14:00:00 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.473 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.473 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.473 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:58.473 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:58.473 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.473 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.473 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.473 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:58.473 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.473 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.731 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.731 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:58.731 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.731 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.731 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.731 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:58.731 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:58.731 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.731 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:58.731 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.731 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.731 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:58.731 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:58.732 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "054bfb78-1040-45ef-805b-e2127c3d83fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "054bfb78-1040-45ef-805b-e2127c3d83fb",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c059990a-7df8-43d1-8c8b-f93306ba1b5f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c059990a-7df8-43d1-8c8b-f93306ba1b5f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ecbaf644-2490-4e52-a7f1-f97ed4fd5115"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ecbaf644-2490-4e52-a7f1-f97ed4fd5115",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "983037bc-6189-49c4-a792-b226b37e3399"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "983037bc-6189-49c4-a792-b226b37e3399",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1da0b9fe-c80f-486c-94e4-614d8470cc8d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1da0b9fe-c80f-486c-94e4-614d8470cc8d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:58.732 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:58.732 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:58.732 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:58.732 14:00:00 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60771 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60771 ']' 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60771 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60771 00:06:58.732 killing process with pid 60771 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60771' 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60771 00:06:58.732 14:00:00 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60771 00:07:00.630 14:00:01 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:00.630 14:00:01 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:00.630 14:00:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:00.630 14:00:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.630 14:00:01 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 ************************************ 00:07:00.630 START TEST bdev_hello_world 00:07:00.630 ************************************ 00:07:00.630 14:00:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:00.630 [2024-12-09 14:00:01.989869] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:00.630 [2024-12-09 14:00:01.990186] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61398 ] 00:07:00.630 [2024-12-09 14:00:02.144492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.630 [2024-12-09 14:00:02.245153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.195 [2024-12-09 14:00:02.789016] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:01.195 [2024-12-09 14:00:02.789073] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:01.195 [2024-12-09 14:00:02.789097] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:01.195 [2024-12-09 14:00:02.791644] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:01.195 [2024-12-09 14:00:02.792094] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:01.195 [2024-12-09 14:00:02.792123] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:01.195 [2024-12-09 14:00:02.792327] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
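The hello_bdev NOTICE lines above trace the example's whole life cycle: start the app framework, open the Nvme0n1 bdev, get an I/O channel, write a buffer, and on write completion read it back and print the recovered "Hello World!" string. It can be re-run by hand against any bdev in the generated config, using the same invocation the test wrapper used:

    # -b selects the target bdev from the JSON config.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1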
00:07:01.195 00:07:01.195 [2024-12-09 14:00:02.792350] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:01.761 00:07:01.761 real 0m1.611s 00:07:01.761 user 0m1.322s 00:07:01.761 sys 0m0.181s 00:07:01.761 14:00:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.761 14:00:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:01.761 ************************************ 00:07:01.761 END TEST bdev_hello_world 00:07:01.761 ************************************ 00:07:02.020 14:00:03 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:02.020 14:00:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.020 14:00:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.020 14:00:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.020 ************************************ 00:07:02.020 START TEST bdev_bounds 00:07:02.020 ************************************ 00:07:02.020 Process bdevio pid: 61429 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61429 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61429' 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61429 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61429 ']' 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:02.020 14:00:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:02.020 [2024-12-09 14:00:03.639800] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:07:02.020 [2024-12-09 14:00:03.640079] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61429 ] 00:07:02.020 [2024-12-09 14:00:03.802149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.277 [2024-12-09 14:00:03.908845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.277 [2024-12-09 14:00:03.909034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.277 [2024-12-09 14:00:03.909164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.843 14:00:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.843 14:00:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:02.843 14:00:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:03.101 I/O targets: 00:07:03.101 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:03.101 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:03.101 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:03.101 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.101 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.101 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.101 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:03.101 00:07:03.101 00:07:03.101 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.101 http://cunit.sourceforge.net/ 00:07:03.101 00:07:03.101 00:07:03.101 Suite: bdevio tests on: Nvme3n1 00:07:03.101 Test: blockdev write read block ...passed 00:07:03.101 Test: blockdev write zeroes read block ...passed 00:07:03.101 Test: blockdev write zeroes read no split ...passed 00:07:03.101 Test: blockdev write zeroes read split ...passed 00:07:03.101 Test: blockdev write zeroes read split partial ...passed 00:07:03.101 Test: blockdev reset ...[2024-12-09 14:00:04.685899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:03.101 [2024-12-09 14:00:04.688576] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
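bdevio is structured as an RPC-driven CUnit harness: the binary is started with -w so it waits for a trigger, and the companion tests.py then fires perform_tests, producing one suite per bdev for the seven I/O targets listed above. Outside the run_test wrapper, the two-step flow seen in this log looks like:

    # -w = wait for the RPC trigger; -s 0 matches the PRE_RESERVED_MEM=0
    # set by blockdev.sh earlier in this log.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests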
00:07:03.101 passed 00:07:03.101 Test: blockdev write read 8 blocks ...passed 00:07:03.101 Test: blockdev write read size > 128k ...passed 00:07:03.101 Test: blockdev write read invalid size ...passed 00:07:03.101 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.101 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.101 Test: blockdev write read max offset ...passed 00:07:03.101 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.101 Test: blockdev writev readv 8 blocks ...passed 00:07:03.101 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.101 Test: blockdev writev readv block ...passed 00:07:03.101 Test: blockdev writev readv size > 128k ...passed 00:07:03.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.101 Test: blockdev comparev and writev ...[2024-12-09 14:00:04.694368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1804000 len:0x1000 00:07:03.101 [2024-12-09 14:00:04.694507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.101 passed 00:07:03.101 Test: blockdev nvme passthru rw ...passed 00:07:03.101 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:00:04.694939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.101 [2024-12-09 14:00:04.694966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.101 passed 00:07:03.101 Test: blockdev nvme admin passthru ...passed 00:07:03.101 Test: blockdev copy ...passed 00:07:03.101 Suite: bdevio tests on: Nvme2n3 00:07:03.101 Test: blockdev write read block ...passed 00:07:03.101 Test: blockdev write zeroes read block ...passed 00:07:03.101 Test: blockdev write zeroes read no split ...passed 00:07:03.101 Test: blockdev write zeroes read split ...passed 00:07:03.101 Test: blockdev write zeroes read split partial ...passed 00:07:03.101 Test: blockdev reset ...[2024-12-09 14:00:04.739148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller [2024-12-09 14:00:04.742007] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:03.101 passed 00:07:03.101 Test: blockdev write read 8 blocks ...
00:07:03.101 passed 00:07:03.102 Test: blockdev write read size > 128k ...passed 00:07:03.102 Test: blockdev write read invalid size ...passed 00:07:03.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.102 Test: blockdev write read max offset ...passed 00:07:03.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.102 Test: blockdev writev readv 8 blocks ...passed 00:07:03.102 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.102 Test: blockdev writev readv block ...passed 00:07:03.102 Test: blockdev writev readv size > 128k ...passed 00:07:03.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.102 Test: blockdev comparev and writev ...[2024-12-09 14:00:04.747276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1802000 len:0x1000 [2024-12-09 14:00:04.747317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.102 passed 00:07:03.102 Test: blockdev nvme passthru rw ...passed 00:07:03.102 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:00:04.747869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.102 [2024-12-09 14:00:04.747976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.102 passed 00:07:03.102 Test: blockdev nvme admin passthru ...passed 00:07:03.102 Test: blockdev copy ...passed 00:07:03.102 Suite: bdevio tests on: Nvme2n2 00:07:03.102 Test: blockdev write read block ...passed 00:07:03.102 Test: blockdev write zeroes read block ...passed 00:07:03.102 Test: blockdev write zeroes read no split ...passed 00:07:03.102 Test: blockdev write zeroes read split ...passed 00:07:03.102 Test: blockdev write zeroes read split partial ...passed 00:07:03.102 Test: blockdev reset ...[2024-12-09 14:00:04.791922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:03.102 [2024-12-09 14:00:04.794837] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
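Each suite's reset test performs a full controller reset: the driver logs nvme_ctrlr_disconnect when it tears the controller down and bdev_nvme_reset_ctrlr_complete once reconnection succeeds, so every "resetting controller" / "Resetting controller successful." pair above is one complete cycle. The same reset can be requested out of band through the RPC interface; this is a hedged sketch that assumes the bdev_nvme_reset_controller RPC is available in this SPDK revision:

    # Ask the running target to reset the controller backing Nvme2 by hand:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2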
00:07:03.102 passed 00:07:03.102 Test: blockdev write read 8 blocks ...passed 00:07:03.102 Test: blockdev write read size > 128k ...passed 00:07:03.102 Test: blockdev write read invalid size ...passed 00:07:03.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.102 Test: blockdev write read max offset ...passed 00:07:03.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.102 Test: blockdev writev readv 8 blocks ...passed 00:07:03.102 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.102 Test: blockdev writev readv block ...passed 00:07:03.102 Test: blockdev writev readv size > 128k ...passed 00:07:03.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.102 Test: blockdev comparev and writev ...[2024-12-09 14:00:04.800869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3a38000 len:0x1000 00:07:03.102 [2024-12-09 14:00:04.800998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.102 passed 00:07:03.102 Test: blockdev nvme passthru rw ...passed 00:07:03.102 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:00:04.801622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.102 [2024-12-09 14:00:04.801712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.102 passed 00:07:03.102 Test: blockdev nvme admin passthru ...passed 00:07:03.102 Test: blockdev copy ...passed 00:07:03.102 Suite: bdevio tests on: Nvme2n1 00:07:03.102 Test: blockdev write read block ...passed 00:07:03.102 Test: blockdev write zeroes read block ...passed 00:07:03.102 Test: blockdev write zeroes read no split ...passed 00:07:03.102 Test: blockdev write zeroes read split ...passed 00:07:03.102 Test: blockdev write zeroes read split partial ...passed 00:07:03.102 Test: blockdev reset ...[2024-12-09 14:00:04.852408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:03.102 [2024-12-09 14:00:04.855369] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
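A note on the *NOTICE* completions inside these suites: judging by the logged status codes and the fact that every test still reports "passed", the COMPARE FAILURE and INVALID OPCODE completions are the intended outcomes of the "comparev and writev" and passthru tests, which exercise the error-propagation paths rather than happy paths. The status tuples decode per the NVMe specification:

    # (SCT/SC) pairs printed by spdk_nvme_print_completion:
    #   (02/85) -> Status Code Type 0x2 (Media and Data Integrity Errors),
    #              Status Code 0x85 (Compare Failure)
    #   (00/01) -> Status Code Type 0x0 (Generic Command Status),
    #              Status Code 0x01 (Invalid Opcode)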
00:07:03.102 passed 00:07:03.102 Test: blockdev write read 8 blocks ...passed 00:07:03.102 Test: blockdev write read size > 128k ...passed 00:07:03.102 Test: blockdev write read invalid size ...passed 00:07:03.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.102 Test: blockdev write read max offset ...passed 00:07:03.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.102 Test: blockdev writev readv 8 blocks ...passed 00:07:03.102 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.102 Test: blockdev writev readv block ...passed 00:07:03.102 Test: blockdev writev readv size > 128k ...passed 00:07:03.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.102 Test: blockdev comparev and writev ...[2024-12-09 14:00:04.861610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3a34000 len:0x1000 00:07:03.102 [2024-12-09 14:00:04.861746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.102 passed 00:07:03.102 Test: blockdev nvme passthru rw ...passed 00:07:03.102 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:00:04.862435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.102 [2024-12-09 14:00:04.862530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.102 passed 00:07:03.102 Test: blockdev nvme admin passthru ...passed 00:07:03.102 Test: blockdev copy ...passed 00:07:03.102 Suite: bdevio tests on: Nvme1n1p2 00:07:03.102 Test: blockdev write read block ...passed 00:07:03.102 Test: blockdev write zeroes read block ...passed 00:07:03.102 Test: blockdev write zeroes read no split ...passed 00:07:03.102 Test: blockdev write zeroes read split ...passed 00:07:03.360 Test: blockdev write zeroes read split partial ...passed 00:07:03.360 Test: blockdev reset ...[2024-12-09 14:00:04.906233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:03.360 [2024-12-09 14:00:04.909884] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:03.360 passed 00:07:03.360 Test: blockdev write read 8 blocks ...
00:07:03.360 passed 00:07:03.360 Test: blockdev write read size > 128k ...passed 00:07:03.360 Test: blockdev write read invalid size ...passed 00:07:03.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.360 Test: blockdev write read max offset ...passed 00:07:03.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.360 Test: blockdev writev readv 8 blocks ...passed 00:07:03.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.360 Test: blockdev writev readv block ...passed 00:07:03.360 Test: blockdev writev readv size > 128k ...passed 00:07:03.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.360 Test: blockdev comparev and writev ...[2024-12-09 14:00:04.916012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c3a30000 len:0x1000 00:07:03.360 [2024-12-09 14:00:04.916051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.360 passed 00:07:03.360 Test: blockdev nvme passthru rw ...passed 00:07:03.360 Test: blockdev nvme passthru vendor specific ...passed 00:07:03.360 Test: blockdev nvme admin passthru ...passed 00:07:03.360 Test: blockdev copy ...passed 00:07:03.360 Suite: bdevio tests on: Nvme1n1p1 00:07:03.360 Test: blockdev write read block ...passed 00:07:03.360 Test: blockdev write zeroes read block ...passed 00:07:03.360 Test: blockdev write zeroes read no split ...passed 00:07:03.360 Test: blockdev write zeroes read split ...passed 00:07:03.360 Test: blockdev write zeroes read split partial ...passed 00:07:03.360 Test: blockdev reset ...[2024-12-09 14:00:04.970450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:03.360 [2024-12-09 14:00:04.973315] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
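Each blockdev reset test above exercises a full controller bounce: nvme_ctrlr_disconnect logs the 'resetting controller' notice, and bdev_nvme_reset_ctrlr_complete logs the successful reattach. The same path can be driven by hand over the target's RPC socket (a sketch; the controller name Nvme1 and the default socket path are assumptions for this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Reset the NVMe controller backing the Nvme1n1p1/Nvme1n1p2 bdevs; the target
# should emit the same disconnect/reset-complete NOTICE pair seen above.
$rpc -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme1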
00:07:03.360 passed 00:07:03.360 Test: blockdev write read 8 blocks ...passed 00:07:03.360 Test: blockdev write read size > 128k ...passed 00:07:03.360 Test: blockdev write read invalid size ...passed 00:07:03.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.360 Test: blockdev write read max offset ...passed 00:07:03.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.360 Test: blockdev writev readv 8 blocks ...passed 00:07:03.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.360 Test: blockdev writev readv block ...passed 00:07:03.360 Test: blockdev writev readv size > 128k ...passed 00:07:03.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.360 Test: blockdev comparev and writev ...[2024-12-09 14:00:04.980412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b220e000 len:0x1000 00:07:03.360 [2024-12-09 14:00:04.980455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.360 passed 00:07:03.360 Test: blockdev nvme passthru rw ...passed 00:07:03.360 Test: blockdev nvme passthru vendor specific ...passed 00:07:03.360 Test: blockdev nvme admin passthru ...passed 00:07:03.360 Test: blockdev copy ...passed 00:07:03.360 Suite: bdevio tests on: Nvme0n1 00:07:03.360 Test: blockdev write read block ...passed 00:07:03.360 Test: blockdev write zeroes read block ...passed 00:07:03.360 Test: blockdev write zeroes read no split ...passed 00:07:03.360 Test: blockdev write zeroes read split ...passed 00:07:03.360 Test: blockdev write zeroes read split partial ...passed 00:07:03.360 Test: blockdev reset ...[2024-12-09 14:00:05.052959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:03.360 [2024-12-09 14:00:05.055727] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:03.360 passed 00:07:03.360 Test: blockdev write read 8 blocks ...passed 00:07:03.360 Test: blockdev write read size > 128k ...passed 00:07:03.360 Test: blockdev write read invalid size ...passed 00:07:03.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.360 Test: blockdev write read max offset ...passed 00:07:03.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.360 Test: blockdev writev readv 8 blocks ...passed 00:07:03.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.360 Test: blockdev writev readv block ...passed 00:07:03.360 Test: blockdev writev readv size > 128k ...passed 00:07:03.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.360 Test: blockdev comparev and writev ...passed 00:07:03.360 Test: blockdev nvme passthru rw ...[2024-12-09 14:00:05.061682] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:03.361 separate metadata which is not supported yet. 
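The *ERROR* line above is a deliberate skip rather than a failure: Nvme0n1 is formatted with separate per-block metadata, which blockdev_comparev_and_writev cannot drive yet, so the suite records the test as passed without running it. Whether a bdev carries such metadata can be checked over RPC (a sketch; the md_size field name assumes a recent bdev_get_bdevs output schema):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# A non-zero md_size indicates the namespace carries separate metadata:
$rpc -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 | jq '.[0].md_size'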
00:07:03.361 passed 00:07:03.361 Test: blockdev nvme passthru vendor specific ...passed 00:07:03.361 Test: blockdev nvme admin passthru ...[2024-12-09 14:00:05.062243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:03.361 [2024-12-09 14:00:05.062335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:03.361 passed 00:07:03.361 Test: blockdev copy ...passed 00:07:03.361 00:07:03.361 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.361 suites 7 7 n/a 0 0 00:07:03.361 tests 161 161 161 0 0 00:07:03.361 asserts 1025 1025 1025 0 n/a 00:07:03.361 00:07:03.361 Elapsed time = 1.126 seconds 00:07:03.361 0 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61429 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61429 ']' 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61429 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61429 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61429' 00:07:03.361 killing process with pid 61429 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61429 00:07:03.361 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61429 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:04.293 00:07:04.293 real 0m2.195s 00:07:04.293 user 0m5.683s 00:07:04.293 sys 0m0.280s 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:04.293 ************************************ 00:07:04.293 END TEST bdev_bounds 00:07:04.293 ************************************ 00:07:04.293 14:00:05 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:04.293 14:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:04.293 14:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.293 14:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:04.293 ************************************ 00:07:04.293 START TEST bdev_nbd 00:07:04.293 ************************************ 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61489 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61489 /var/tmp/spdk-nbd.sock 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:04.293 14:00:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61489 ']' 00:07:04.294 14:00:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.294 14:00:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.294 14:00:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.294 14:00:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.294 14:00:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:04.294 [2024-12-09 14:00:05.881067] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
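With bdev_svc now listening on /var/tmp/spdk-nbd.sock, the trace that follows exports each bdev as a kernel /dev/nbdX node and sanity-reads a single 4 KiB block from it with O_DIRECT dd. Condensed, the per-device flow is roughly this (a sketch of what the harness does, not the harness source; the bdev names and socket path are taken from this run):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
for bdev in "${bdevs[@]}"; do
  dev=$($rpc nbd_start_disk "$bdev")   # the RPC prints the allocated /dev/nbdX
  dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one direct read
done
$rpc nbd_get_disks   # JSON map of nbd device to bdev, as dumped later in the log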
00:07:04.294 [2024-12-09 14:00:05.881376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.294 [2024-12-09 14:00:06.036827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.559 [2024-12-09 14:00:06.137197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.143 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.401 1+0 records in 00:07:05.401 1+0 records out 00:07:05.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260185 s, 15.7 MB/s 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.401 14:00:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:05.659 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:05.659 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:05.659 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:05.659 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.659 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.659 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.659 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.660 1+0 records in 00:07:05.660 1+0 records out 00:07:05.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385457 s, 10.6 MB/s 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:05.660 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.918 1+0 records in 00:07:05.918 1+0 records out 00:07:05.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432807 s, 9.5 MB/s 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.918 1+0 records in 00:07:05.918 1+0 records out 00:07:05.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473985 s, 8.6 MB/s 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.918 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.176 1+0 records in 00:07:06.176 1+0 records out 00:07:06.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465166 s, 8.8 MB/s 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.176 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.177 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.177 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.177 14:00:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.177 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.177 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:06.177 14:00:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.435 1+0 records in 00:07:06.435 1+0 records out 00:07:06.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385327 s, 10.6 MB/s 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:06.435 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.692 1+0 records in 00:07:06.692 1+0 records out 00:07:06.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563137 s, 7.3 MB/s 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:06.692 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.949 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:06.949 { 00:07:06.949 "nbd_device": "/dev/nbd0", 00:07:06.949 "bdev_name": "Nvme0n1" 00:07:06.949 }, 00:07:06.949 { 00:07:06.949 "nbd_device": "/dev/nbd1", 00:07:06.949 "bdev_name": "Nvme1n1p1" 00:07:06.949 }, 00:07:06.949 { 00:07:06.949 "nbd_device": "/dev/nbd2", 00:07:06.949 "bdev_name": "Nvme1n1p2" 00:07:06.949 }, 00:07:06.949 { 00:07:06.949 "nbd_device": "/dev/nbd3", 00:07:06.949 "bdev_name": "Nvme2n1" 00:07:06.949 }, 00:07:06.949 { 00:07:06.949 "nbd_device": "/dev/nbd4", 00:07:06.949 "bdev_name": "Nvme2n2" 00:07:06.949 }, 00:07:06.949 { 00:07:06.949 "nbd_device": "/dev/nbd5", 00:07:06.949 "bdev_name": "Nvme2n3" 00:07:06.949 }, 00:07:06.949 { 00:07:06.950 "nbd_device": "/dev/nbd6", 00:07:06.950 "bdev_name": "Nvme3n1" 00:07:06.950 } 00:07:06.950 ]' 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:06.950 { 00:07:06.950 "nbd_device": "/dev/nbd0", 00:07:06.950 "bdev_name": "Nvme0n1" 00:07:06.950 }, 00:07:06.950 { 00:07:06.950 "nbd_device": "/dev/nbd1", 00:07:06.950 "bdev_name": "Nvme1n1p1" 00:07:06.950 }, 00:07:06.950 { 00:07:06.950 "nbd_device": "/dev/nbd2", 00:07:06.950 "bdev_name": "Nvme1n1p2" 00:07:06.950 }, 00:07:06.950 { 00:07:06.950 "nbd_device": "/dev/nbd3", 00:07:06.950 "bdev_name": "Nvme2n1" 00:07:06.950 }, 00:07:06.950 { 00:07:06.950 "nbd_device": "/dev/nbd4", 00:07:06.950 "bdev_name": "Nvme2n2" 00:07:06.950 }, 00:07:06.950 { 00:07:06.950 "nbd_device": "/dev/nbd5", 00:07:06.950 "bdev_name": "Nvme2n3" 00:07:06.950 }, 00:07:06.950 { 00:07:06.950 "nbd_device": "/dev/nbd6", 00:07:06.950 "bdev_name": "Nvme3n1" 00:07:06.950 } 00:07:06.950 ]' 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.950 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.207 14:00:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.463 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:07.720 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:07.720 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:07.720 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.721 14:00:09 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.721 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.977 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.234 14:00:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.495 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.754 
14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.754 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:09.011 /dev/nbd0 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.012 1+0 records in 00:07:09.012 1+0 records out 00:07:09.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375039 s, 10.9 MB/s 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.012 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:09.269 /dev/nbd1 00:07:09.269 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.269 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.269 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:09.269 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.269 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.269 14:00:10 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.270 1+0 records in 00:07:09.270 1+0 records out 00:07:09.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045033 s, 9.1 MB/s 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.270 14:00:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:09.270 /dev/nbd10 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.527 1+0 records in 00:07:09.527 1+0 records out 00:07:09.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506659 s, 8.1 MB/s 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:09.527 /dev/nbd11 00:07:09.527 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.784 1+0 records in 00:07:09.784 1+0 records out 00:07:09.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521271 s, 7.9 MB/s 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:09.784 /dev/nbd12 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
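The grep calls against /proc/partitions above come from the waitfornbd helper: after nbd_start_disk, it polls until the kernel has registered the new device node before any I/O is attempted. Reduced to its core it behaves roughly like this (a sketch; the retry bound of 20 matches the '(( i <= 20 ))' checks in the trace, while the sleep interval is an assumption):

waitfornbd() {
  local nbd_name=$1
  local i
  for ((i = 1; i <= 20; i++)); do
    # Succeed as soon as the device appears as a whole-word entry.
    grep -q -w "$nbd_name" /proc/partitions && return 0
    sleep 0.1
  done
  return 1   # device never showed up
}
waitfornbd nbd12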
00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.784 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.784 1+0 records in 00:07:09.784 1+0 records out 00:07:09.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385939 s, 10.6 MB/s 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:10.041 /dev/nbd13 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.041 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.042 1+0 records in 00:07:10.042 1+0 records out 00:07:10.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448803 s, 9.1 MB/s 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:10.042 14:00:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:10.299 /dev/nbd14 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.299 1+0 records in 00:07:10.299 1+0 records out 00:07:10.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585764 s, 7.0 MB/s 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.299 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.556 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd0", 00:07:10.556 "bdev_name": "Nvme0n1" 00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd1", 00:07:10.556 "bdev_name": "Nvme1n1p1" 00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd10", 00:07:10.556 "bdev_name": "Nvme1n1p2" 00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd11", 00:07:10.556 "bdev_name": "Nvme2n1" 00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd12", 00:07:10.556 "bdev_name": "Nvme2n2" 00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd13", 00:07:10.556 "bdev_name": "Nvme2n3" 
00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd14", 00:07:10.556 "bdev_name": "Nvme3n1" 00:07:10.556 } 00:07:10.556 ]' 00:07:10.556 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.556 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd0", 00:07:10.556 "bdev_name": "Nvme0n1" 00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd1", 00:07:10.556 "bdev_name": "Nvme1n1p1" 00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd10", 00:07:10.556 "bdev_name": "Nvme1n1p2" 00:07:10.556 }, 00:07:10.556 { 00:07:10.556 "nbd_device": "/dev/nbd11", 00:07:10.557 "bdev_name": "Nvme2n1" 00:07:10.557 }, 00:07:10.557 { 00:07:10.557 "nbd_device": "/dev/nbd12", 00:07:10.557 "bdev_name": "Nvme2n2" 00:07:10.557 }, 00:07:10.557 { 00:07:10.557 "nbd_device": "/dev/nbd13", 00:07:10.557 "bdev_name": "Nvme2n3" 00:07:10.557 }, 00:07:10.557 { 00:07:10.557 "nbd_device": "/dev/nbd14", 00:07:10.557 "bdev_name": "Nvme3n1" 00:07:10.557 } 00:07:10.557 ]' 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.557 /dev/nbd1 00:07:10.557 /dev/nbd10 00:07:10.557 /dev/nbd11 00:07:10.557 /dev/nbd12 00:07:10.557 /dev/nbd13 00:07:10.557 /dev/nbd14' 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.557 /dev/nbd1 00:07:10.557 /dev/nbd10 00:07:10.557 /dev/nbd11 00:07:10.557 /dev/nbd12 00:07:10.557 /dev/nbd13 00:07:10.557 /dev/nbd14' 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:10.557 256+0 records in 00:07:10.557 256+0 records out 00:07:10.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579059 s, 181 MB/s 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.557 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.814 256+0 records in 00:07:10.814 256+0 records out 00:07:10.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.071355 s, 14.7 MB/s 00:07:10.814 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.814 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.814 256+0 records in 00:07:10.814 256+0 records out 00:07:10.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0832607 s, 12.6 MB/s 00:07:10.814 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.814 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:10.814 256+0 records in 00:07:10.814 256+0 records out 00:07:10.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0739615 s, 14.2 MB/s 00:07:10.814 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.814 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:11.071 256+0 records in 00:07:11.071 256+0 records out 00:07:11.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0704765 s, 14.9 MB/s 00:07:11.071 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.071 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:11.071 256+0 records in 00:07:11.071 256+0 records out 00:07:11.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0736322 s, 14.2 MB/s 00:07:11.072 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.072 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:11.072 256+0 records in 00:07:11.072 256+0 records out 00:07:11.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0725205 s, 14.5 MB/s 00:07:11.072 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.072 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:11.329 256+0 records in 00:07:11.329 256+0 records out 00:07:11.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.075106 s, 14.0 MB/s 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.329 14:00:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.586 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.586 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.586 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.586 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.587 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.587 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.587 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.587 14:00:13 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:11.587 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.587 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.844 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.101 14:00:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.358 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:12.685 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:12.685 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:12.685 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:12.686 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.686 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.686 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:12.686 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.686 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.686 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.686 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:12.959 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:13.217 malloc_lvol_verify 00:07:13.217 14:00:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:13.483 3e481733-7d69-4898-bb54-acdb286a3051 00:07:13.483 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:13.741 ffd23a53-fc71-47cd-b0a4-d33bf253b137 00:07:13.741 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:13.741 /dev/nbd0 00:07:13.741 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:13.741 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:13.741 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:13.741 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:13.741 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:13.999 mke2fs 1.47.0 (5-Feb-2023) 00:07:13.999 Discarding device blocks: 0/4096 done 00:07:13.999 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:13.999 00:07:13.999 Allocating group tables: 0/1 done 00:07:13.999 Writing inode tables: 0/1 done 00:07:13.999 Creating journal (1024 blocks): done 00:07:13.999 Writing superblocks and filesystem accounting information: 0/1 done 00:07:13.999 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61489 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61489 ']' 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61489 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61489 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.999 killing process with pid 61489 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61489' 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61489 00:07:13.999 14:00:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61489 00:07:14.932 14:00:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:14.932 00:07:14.932 real 0m10.715s 00:07:14.932 user 0m15.284s 00:07:14.932 sys 0m3.497s 00:07:14.932 14:00:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.932 14:00:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:14.932 ************************************ 00:07:14.932 END TEST bdev_nbd 00:07:14.932 ************************************ 00:07:14.932 14:00:16 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:14.932 14:00:16 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:14.932 14:00:16 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:14.932 skipping fio tests on NVMe due to multi-ns failures. 00:07:14.932 14:00:16 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:14.932 14:00:16 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:14.932 14:00:16 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:14.932 14:00:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:14.932 14:00:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.932 14:00:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.932 ************************************ 00:07:14.932 START TEST bdev_verify 00:07:14.932 ************************************ 00:07:14.932 14:00:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:14.932 [2024-12-09 14:00:16.625672] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:14.932 [2024-12-09 14:00:16.625771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61901 ] 00:07:15.190 [2024-12-09 14:00:16.780606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.190 [2024-12-09 14:00:16.881819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.190 [2024-12-09 14:00:16.882110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.755 Running I/O for 5 seconds... 
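The verify stage that just started is a single bdevperf invocation; the flags below are the ones logged above, with repo paths shortened. -C together with -m 0x3 appears to be why the latency table that follows lists each bdev twice, once under core mask 0x1 and once under 0x2 (an inference from the output, not a documented claim):

#!/usr/bin/env bash
# Sketch of the logged bdevperf verify run (paths abbreviated).
args=(
    --json test/bdev/bdev.json   # bdev config describing all seven bdevs
    -q 128                       # queue depth per job
    -o 4096                      # 4 KiB I/O size
    -w verify                    # write, read back, and compare payloads
    -t 5                         # run for 5 seconds
    -C                           # place a job for each bdev on each reactor core
    -m 0x3                       # core mask: cores 0 and 1
)
./build/examples/bdevperf "${args[@]}"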
00:07:18.117 21248.00 IOPS, 83.00 MiB/s [2024-12-09T14:00:20.847Z] 21760.00 IOPS, 85.00 MiB/s [2024-12-09T14:00:21.781Z] 22848.00 IOPS, 89.25 MiB/s [2024-12-09T14:00:22.722Z] 24272.00 IOPS, 94.81 MiB/s [2024-12-09T14:00:22.722Z] 23731.20 IOPS, 92.70 MiB/s 00:07:20.928 Latency(us) 00:07:20.928 [2024-12-09T14:00:22.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.928 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x0 length 0xbd0bd 00:07:20.928 Nvme0n1 : 5.06 1670.06 6.52 0.00 0.00 76436.44 13812.97 76223.41 00:07:20.928 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:20.928 Nvme0n1 : 5.05 1672.35 6.53 0.00 0.00 76293.17 14115.45 78643.20 00:07:20.928 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x0 length 0x4ff80 00:07:20.928 Nvme1n1p1 : 5.06 1668.75 6.52 0.00 0.00 76369.76 14720.39 68964.04 00:07:20.928 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:20.928 Nvme1n1p1 : 5.05 1671.85 6.53 0.00 0.00 76139.43 15526.99 69770.63 00:07:20.928 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x0 length 0x4ff7f 00:07:20.928 Nvme1n1p2 : 5.06 1668.12 6.52 0.00 0.00 76254.51 15930.29 68157.44 00:07:20.928 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:20.928 Nvme1n1p2 : 5.05 1671.34 6.53 0.00 0.00 75997.82 16232.76 67754.14 00:07:20.928 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x0 length 0x80000 00:07:20.928 Nvme2n1 : 5.07 1667.52 6.51 0.00 0.00 76130.77 17341.83 70577.23 00:07:20.928 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x80000 length 0x80000 00:07:20.928 Nvme2n1 : 5.07 1678.26 6.56 0.00 0.00 75533.10 4511.90 70173.93 00:07:20.928 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x0 length 0x80000 00:07:20.928 Nvme2n2 : 5.07 1666.91 6.51 0.00 0.00 76006.69 16736.89 73400.32 00:07:20.928 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x80000 length 0x80000 00:07:20.928 Nvme2n2 : 5.09 1684.58 6.58 0.00 0.00 75201.84 13913.80 72997.02 00:07:20.928 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x0 length 0x80000 00:07:20.928 Nvme2n3 : 5.08 1674.58 6.54 0.00 0.00 75542.34 4663.14 74610.22 00:07:20.928 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x80000 length 0x80000 00:07:20.928 Nvme2n3 : 5.09 1684.14 6.58 0.00 0.00 75064.33 11947.72 73400.32 00:07:20.928 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x0 length 0x20000 00:07:20.928 Nvme3n1 : 5.10 1682.55 6.57 0.00 0.00 75132.40 10687.41 73400.32 00:07:20.928 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.928 Verification LBA range: start 0x20000 length 0x20000 00:07:20.928 Nvme3n1 
: 5.09 1683.69 6.58 0.00 0.00 74977.27 9225.45 73400.32 00:07:20.928 [2024-12-09T14:00:22.722Z] =================================================================================================================== 00:07:20.928 [2024-12-09T14:00:22.722Z] Total : 23444.70 91.58 0.00 0.00 75788.14 4511.90 78643.20 00:07:22.309 00:07:22.309 real 0m7.312s 00:07:22.309 user 0m13.779s 00:07:22.309 sys 0m0.182s 00:07:22.309 14:00:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.309 14:00:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:22.309 ************************************ 00:07:22.309 END TEST bdev_verify 00:07:22.309 ************************************ 00:07:22.309 14:00:23 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:22.309 14:00:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:22.309 14:00:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.309 14:00:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.309 ************************************ 00:07:22.309 START TEST bdev_verify_big_io 00:07:22.309 ************************************ 00:07:22.309 14:00:23 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:22.309 [2024-12-09 14:00:23.983262] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:22.309 [2024-12-09 14:00:23.983382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61994 ] 00:07:22.567 [2024-12-09 14:00:24.142848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.567 [2024-12-09 14:00:24.242001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.567 [2024-12-09 14:00:24.242172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.135 Running I/O for 5 seconds... 
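The big-I/O variant only changes -o 65536, so every I/O is 64 KiB and the MiB/s column in the table below is simply IOPS times 64 KiB. A quick sanity check against the first sample line that follows (numbers from the log):

# 1782 IOPS * 65536 B / 1048576 B/MiB = 111.375, matching "1782.00 IOPS, 111.38 MiB/s"
awk 'BEGIN { printf "%.2f MiB/s\n", 1782 * 65536 / (1024 * 1024) }'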
00:07:29.717 1782.00 IOPS, 111.38 MiB/s [2024-12-09T14:00:31.511Z] 2810.00 IOPS, 175.62 MiB/s 00:07:29.717 Latency(us) 00:07:29.717 [2024-12-09T14:00:31.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.717 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x0 length 0xbd0b 00:07:29.717 Nvme0n1 : 5.89 104.33 6.52 0.00 0.00 1152008.00 11695.66 1503496.66 00:07:29.717 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:29.717 Nvme0n1 : 6.02 97.42 6.09 0.00 0.00 1256478.51 19055.85 1322818.95 00:07:29.717 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x0 length 0x4ff8 00:07:29.717 Nvme1n1p1 : 5.90 107.00 6.69 0.00 0.00 1103441.85 112116.97 1277649.53 00:07:29.717 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:29.717 Nvme1n1p1 : 6.02 102.55 6.41 0.00 0.00 1157081.92 93161.94 1135688.47 00:07:29.717 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x0 length 0x4ff7 00:07:29.717 Nvme1n1p2 : 6.05 108.82 6.80 0.00 0.00 1041549.23 97598.23 1329271.73 00:07:29.717 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:29.717 Nvme1n1p2 : 6.02 102.86 6.43 0.00 0.00 1116649.07 93565.24 1219574.55 00:07:29.717 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x0 length 0x8000 00:07:29.717 Nvme2n1 : 6.13 108.07 6.75 0.00 0.00 1011210.88 49807.36 1632552.17 00:07:29.717 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x8000 length 0x8000 00:07:29.717 Nvme2n1 : 6.02 105.48 6.59 0.00 0.00 1063689.21 103244.41 1529307.77 00:07:29.717 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x0 length 0x8000 00:07:29.717 Nvme2n2 : 6.13 111.89 6.99 0.00 0.00 945088.79 79853.10 1922927.06 00:07:29.717 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x8000 length 0x8000 00:07:29.717 Nvme2n2 : 6.12 109.11 6.82 0.00 0.00 993116.42 99614.72 1232480.10 00:07:29.717 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x0 length 0x8000 00:07:29.717 Nvme2n3 : 6.23 126.58 7.91 0.00 0.00 807451.69 10788.23 1935832.62 00:07:29.717 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x8000 length 0x8000 00:07:29.717 Nvme2n3 : 6.17 120.36 7.52 0.00 0.00 880832.07 35288.62 1032444.06 00:07:29.717 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x0 length 0x2000 00:07:29.717 Nvme3n1 : 6.35 194.04 12.13 0.00 0.00 513639.72 431.66 1755154.90 00:07:29.717 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:29.717 Verification LBA range: start 0x2000 length 0x2000 00:07:29.717 Nvme3n1 : 6.18 127.88 7.99 0.00 0.00 800855.28 2419.79 1051802.39 00:07:29.717 [2024-12-09T14:00:31.511Z] 
=================================================================================================================== 00:07:29.717 [2024-12-09T14:00:31.511Z] Total : 1626.40 101.65 0.00 0.00 951090.10 431.66 1935832.62 00:07:32.248 00:07:32.248 real 0m9.779s 00:07:32.248 user 0m18.613s 00:07:32.248 sys 0m0.233s 00:07:32.248 14:00:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.248 14:00:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:32.248 ************************************ 00:07:32.248 END TEST bdev_verify_big_io 00:07:32.248 ************************************ 00:07:32.248 14:00:33 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:32.248 14:00:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:32.248 14:00:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.248 14:00:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.248 ************************************ 00:07:32.248 START TEST bdev_write_zeroes 00:07:32.248 ************************************ 00:07:32.248 14:00:33 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:32.248 [2024-12-09 14:00:33.807048] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:32.248 [2024-12-09 14:00:33.807164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62114 ] 00:07:32.248 [2024-12-09 14:00:33.968410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.506 [2024-12-09 14:00:34.069279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.072 Running I/O for 1 seconds... 
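bdev_write_zeroes reuses the same harness with a different workload; per the EAL line above (-c 0x1) it runs on a single core, so the table below carries one row per bdev. Sketch of the logged invocation (paths abbreviated):

# Every I/O is a 4 KiB write-zeroes command; one-second run on one core.
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1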
00:07:34.003 66356.00 IOPS, 259.20 MiB/s 00:07:34.003 Latency(us) 00:07:34.003 [2024-12-09T14:00:35.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.003 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.003 Nvme0n1 : 1.02 9392.45 36.69 0.00 0.00 13597.18 5041.23 68157.44 00:07:34.003 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.003 Nvme1n1p1 : 1.02 9441.31 36.88 0.00 0.00 13506.97 10384.94 56058.49 00:07:34.003 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.003 Nvme1n1p2 : 1.02 9429.43 36.83 0.00 0.00 13478.84 9679.16 56461.78 00:07:34.003 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.003 Nvme2n1 : 1.03 9418.62 36.79 0.00 0.00 13448.48 8721.33 56058.49 00:07:34.003 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.003 Nvme2n2 : 1.03 9407.90 36.75 0.00 0.00 13441.64 8368.44 50815.61 00:07:34.003 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.003 Nvme2n3 : 1.03 9397.15 36.71 0.00 0.00 13431.71 7612.26 50613.96 00:07:34.003 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:34.004 Nvme3n1 : 1.03 9497.20 37.10 0.00 0.00 13267.57 6956.90 44362.83 00:07:34.004 [2024-12-09T14:00:35.798Z] =================================================================================================================== 00:07:34.004 [2024-12-09T14:00:35.798Z] Total : 65984.05 257.75 0.00 0.00 13452.75 5041.23 68157.44 00:07:34.944 00:07:34.945 real 0m2.718s 00:07:34.945 user 0m2.409s 00:07:34.945 sys 0m0.192s 00:07:34.945 14:00:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.945 ************************************ 00:07:34.945 END TEST bdev_write_zeroes 00:07:34.945 ************************************ 00:07:34.945 14:00:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:34.945 14:00:36 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.945 14:00:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:34.945 14:00:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.945 14:00:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.945 ************************************ 00:07:34.945 START TEST bdev_json_nonenclosed 00:07:34.945 ************************************ 00:07:34.945 14:00:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.945 [2024-12-09 14:00:36.570898] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
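bdev_json_nonenclosed is a negative test: bdevperf is pointed at a config whose top level is not wrapped in a JSON object, and the expected outcome is the "not enclosed in {}" rejection traced below. A sketch of the idea; the file body here is illustrative, not the verbatim test/bdev/nonenclosed.json:

#!/usr/bin/env bash
# Build a config made of valid JSON fragments but with no enclosing {...}.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": [
  { "subsystem": "bdev", "config": [] }
]
EOF
# Expected: json_config_prepare_ctx fails and the app stops with a non-zero exit.
./build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1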
00:07:34.945 [2024-12-09 14:00:36.571018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62167 ] 00:07:34.945 [2024-12-09 14:00:36.724653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.203 [2024-12-09 14:00:36.825862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.203 [2024-12-09 14:00:36.825942] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:35.203 [2024-12-09 14:00:36.825958] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.203 [2024-12-09 14:00:36.825967] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.463 00:07:35.463 real 0m0.501s 00:07:35.463 user 0m0.304s 00:07:35.463 sys 0m0.092s 00:07:35.463 14:00:37 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.463 14:00:37 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:35.463 ************************************ 00:07:35.463 END TEST bdev_json_nonenclosed 00:07:35.463 ************************************ 00:07:35.463 14:00:37 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.463 14:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:35.463 14:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.463 14:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.463 ************************************ 00:07:35.463 START TEST bdev_json_nonarray 00:07:35.463 ************************************ 00:07:35.464 14:00:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.464 [2024-12-09 14:00:37.111650] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:35.464 [2024-12-09 14:00:37.111776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62192 ] 00:07:35.722 [2024-12-09 14:00:37.272366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.722 [2024-12-09 14:00:37.373781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.722 [2024-12-09 14:00:37.373875] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
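bdev_json_nonarray is the companion negative test: the config is enclosed in {}, but 'subsystems' is an object rather than an array, which trips the "'subsystems' should be an array" check just logged. Again an illustrative reconstruction, not the verbatim test/bdev/nonarray.json:

#!/usr/bin/env bash
cat > /tmp/nonarray.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev" }
}
EOF
# Expected: json_config_prepare_ctx rejects the config and spdk_app_stop is
# reported as "spdk_app_stop'd on non-zero", as in the trace above.
./build/examples/bdevperf --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1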
00:07:35.722 [2024-12-09 14:00:37.373893] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.722 [2024-12-09 14:00:37.373903] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.980 00:07:35.980 real 0m0.506s 00:07:35.980 user 0m0.316s 00:07:35.980 sys 0m0.086s 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.980 ************************************ 00:07:35.980 END TEST bdev_json_nonarray 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:35.980 ************************************ 00:07:35.980 14:00:37 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:35.980 14:00:37 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:35.980 14:00:37 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:35.980 14:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.980 14:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.980 14:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.980 ************************************ 00:07:35.980 START TEST bdev_gpt_uuid 00:07:35.980 ************************************ 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62218 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62218 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62218 ']' 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.980 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.981 14:00:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.981 [2024-12-09 14:00:37.671124] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
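bdev_gpt_uuid runs against a long-lived spdk_tgt rather than bdevperf: it loads the bdev config, waits for the gpt examine pass to claim Nvme1n1, then asserts that looking a partition up by GUID returns exactly one bdev whose alias is that GUID. A hedged sketch of the assertions traced below (rpc.py arguments as logged; the default RPC socket is assumed):

#!/usr/bin/env bash
rpc=./scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first partition GUID from the trace

$rpc load_config -j test/bdev/bdev.json     # recreate the GPT-split Nvme1n1p1/p2 bdevs
$rpc bdev_wait_for_examine                  # block until the gpt module has claimed them

bdev=$($rpc bdev_get_bdevs -b "$uuid")
[[ $(jq -r length <<<"$bdev") == 1 ]]                    # exactly one bdev matches the GUID
[[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]   # its alias is the partition GUID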
00:07:35.981 [2024-12-09 14:00:37.671243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62218 ] 00:07:36.239 [2024-12-09 14:00:37.831956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.239 [2024-12-09 14:00:37.931187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.804 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.804 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:36.804 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:36.804 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.804 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.062 Some configs were skipped because the RPC state that can call them passed over. 00:07:37.062 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.062 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:37.062 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.062 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:37.320 { 00:07:37.320 "name": "Nvme1n1p1", 00:07:37.320 "aliases": [ 00:07:37.320 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:37.320 ], 00:07:37.320 "product_name": "GPT Disk", 00:07:37.320 "block_size": 4096, 00:07:37.320 "num_blocks": 655104, 00:07:37.320 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:37.320 "assigned_rate_limits": { 00:07:37.320 "rw_ios_per_sec": 0, 00:07:37.320 "rw_mbytes_per_sec": 0, 00:07:37.320 "r_mbytes_per_sec": 0, 00:07:37.320 "w_mbytes_per_sec": 0 00:07:37.320 }, 00:07:37.320 "claimed": false, 00:07:37.320 "zoned": false, 00:07:37.320 "supported_io_types": { 00:07:37.320 "read": true, 00:07:37.320 "write": true, 00:07:37.320 "unmap": true, 00:07:37.320 "flush": true, 00:07:37.320 "reset": true, 00:07:37.320 "nvme_admin": false, 00:07:37.320 "nvme_io": false, 00:07:37.320 "nvme_io_md": false, 00:07:37.320 "write_zeroes": true, 00:07:37.320 "zcopy": false, 00:07:37.320 "get_zone_info": false, 00:07:37.320 "zone_management": false, 00:07:37.320 "zone_append": false, 00:07:37.320 "compare": true, 00:07:37.320 "compare_and_write": false, 00:07:37.320 "abort": true, 00:07:37.320 "seek_hole": false, 00:07:37.320 "seek_data": false, 00:07:37.320 "copy": true, 00:07:37.320 "nvme_iov_md": false 00:07:37.320 }, 00:07:37.320 "driver_specific": { 
00:07:37.320 "gpt": { 00:07:37.320 "base_bdev": "Nvme1n1", 00:07:37.320 "offset_blocks": 256, 00:07:37.320 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:37.320 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:37.320 "partition_name": "SPDK_TEST_first" 00:07:37.320 } 00:07:37.320 } 00:07:37.320 } 00:07:37.320 ]' 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:37.320 { 00:07:37.320 "name": "Nvme1n1p2", 00:07:37.320 "aliases": [ 00:07:37.320 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:37.320 ], 00:07:37.320 "product_name": "GPT Disk", 00:07:37.320 "block_size": 4096, 00:07:37.320 "num_blocks": 655103, 00:07:37.320 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:37.320 "assigned_rate_limits": { 00:07:37.320 "rw_ios_per_sec": 0, 00:07:37.320 "rw_mbytes_per_sec": 0, 00:07:37.320 "r_mbytes_per_sec": 0, 00:07:37.320 "w_mbytes_per_sec": 0 00:07:37.320 }, 00:07:37.320 "claimed": false, 00:07:37.320 "zoned": false, 00:07:37.320 "supported_io_types": { 00:07:37.320 "read": true, 00:07:37.320 "write": true, 00:07:37.320 "unmap": true, 00:07:37.320 "flush": true, 00:07:37.320 "reset": true, 00:07:37.320 "nvme_admin": false, 00:07:37.320 "nvme_io": false, 00:07:37.320 "nvme_io_md": false, 00:07:37.320 "write_zeroes": true, 00:07:37.320 "zcopy": false, 00:07:37.320 "get_zone_info": false, 00:07:37.320 "zone_management": false, 00:07:37.320 "zone_append": false, 00:07:37.320 "compare": true, 00:07:37.320 "compare_and_write": false, 00:07:37.320 "abort": true, 00:07:37.320 "seek_hole": false, 00:07:37.320 "seek_data": false, 00:07:37.320 "copy": true, 00:07:37.320 "nvme_iov_md": false 00:07:37.320 }, 00:07:37.320 "driver_specific": { 00:07:37.320 "gpt": { 00:07:37.320 "base_bdev": "Nvme1n1", 00:07:37.320 "offset_blocks": 655360, 00:07:37.320 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:37.320 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:37.320 "partition_name": "SPDK_TEST_second" 00:07:37.320 } 00:07:37.320 } 00:07:37.320 } 00:07:37.320 ]' 00:07:37.320 14:00:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62218 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62218 ']' 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62218 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62218 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.320 killing process with pid 62218 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62218' 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62218 00:07:37.320 14:00:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62218 00:07:39.224 00:07:39.224 real 0m3.011s 00:07:39.224 user 0m3.125s 00:07:39.224 sys 0m0.384s 00:07:39.224 ************************************ 00:07:39.224 END TEST bdev_gpt_uuid 00:07:39.224 ************************************ 00:07:39.224 14:00:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.224 14:00:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:39.224 14:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:39.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.485 Waiting for block devices as requested 00:07:39.485 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.485 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:39.746 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.746 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:45.023 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:45.023 14:00:46 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:45.023 14:00:46 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:45.023 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:45.023 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:45.023 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:45.023 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:45.023 14:00:46 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:45.023 00:07:45.023 real 0m56.874s 00:07:45.023 user 1m13.820s 00:07:45.023 sys 0m7.616s 00:07:45.023 14:00:46 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.023 14:00:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.023 ************************************ 00:07:45.023 END TEST blockdev_nvme_gpt 00:07:45.023 ************************************ 00:07:45.023 14:00:46 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:45.023 14:00:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.023 14:00:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.023 14:00:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.023 ************************************ 00:07:45.023 START TEST nvme 00:07:45.023 ************************************ 00:07:45.023 14:00:46 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:45.281 * Looking for test storage... 00:07:45.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.281 14:00:46 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.281 14:00:46 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.281 14:00:46 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.281 14:00:46 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.281 14:00:46 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.281 14:00:46 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.281 14:00:46 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.281 14:00:46 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.281 14:00:46 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.281 14:00:46 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.281 14:00:46 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.281 14:00:46 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:45.281 14:00:46 nvme -- scripts/common.sh@345 -- # : 1 00:07:45.281 14:00:46 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.281 14:00:46 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.281 14:00:46 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:45.281 14:00:46 nvme -- scripts/common.sh@353 -- # local d=1 00:07:45.281 14:00:46 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.281 14:00:46 nvme -- scripts/common.sh@355 -- # echo 1 00:07:45.281 14:00:46 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.281 14:00:46 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:45.281 14:00:46 nvme -- scripts/common.sh@353 -- # local d=2 00:07:45.281 14:00:46 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.281 14:00:46 nvme -- scripts/common.sh@355 -- # echo 2 00:07:45.281 14:00:46 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.281 14:00:46 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.281 14:00:46 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.281 14:00:46 nvme -- scripts/common.sh@368 -- # return 0 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.281 --rc genhtml_branch_coverage=1 00:07:45.281 --rc genhtml_function_coverage=1 00:07:45.281 --rc genhtml_legend=1 00:07:45.281 --rc geninfo_all_blocks=1 00:07:45.281 --rc geninfo_unexecuted_blocks=1 00:07:45.281 00:07:45.281 ' 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.281 --rc genhtml_branch_coverage=1 00:07:45.281 --rc genhtml_function_coverage=1 00:07:45.281 --rc genhtml_legend=1 00:07:45.281 --rc geninfo_all_blocks=1 00:07:45.281 --rc geninfo_unexecuted_blocks=1 00:07:45.281 00:07:45.281 ' 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.281 --rc genhtml_branch_coverage=1 00:07:45.281 --rc genhtml_function_coverage=1 00:07:45.281 --rc genhtml_legend=1 00:07:45.281 --rc geninfo_all_blocks=1 00:07:45.281 --rc geninfo_unexecuted_blocks=1 00:07:45.281 00:07:45.281 ' 00:07:45.281 14:00:46 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.281 --rc genhtml_branch_coverage=1 00:07:45.281 --rc genhtml_function_coverage=1 00:07:45.281 --rc genhtml_legend=1 00:07:45.281 --rc geninfo_all_blocks=1 00:07:45.281 --rc geninfo_unexecuted_blocks=1 00:07:45.281 00:07:45.281 ' 00:07:45.281 14:00:46 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:45.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.108 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.108 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.108 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.108 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.108 14:00:47 nvme -- nvme/nvme.sh@79 -- # uname 00:07:46.365 14:00:47 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:46.365 14:00:47 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:46.365 14:00:47 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:46.365 14:00:47 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:46.365 14:00:47 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:46.365 14:00:47 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:46.365 Waiting for stub to ready for secondary processes... 00:07:46.365 14:00:47 nvme -- common/autotest_common.sh@1075 -- # stubpid=62852 00:07:46.365 14:00:47 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:46.365 14:00:47 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:46.365 14:00:47 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62852 ]] 00:07:46.365 14:00:47 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:46.365 14:00:47 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:46.365 [2024-12-09 14:00:47.940249] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:46.365 [2024-12-09 14:00:47.940553] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:47.299 [2024-12-09 14:00:48.727867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.299 [2024-12-09 14:00:48.870451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.299 [2024-12-09 14:00:48.870652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.299 [2024-12-09 14:00:48.871928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.299 [2024-12-09 14:00:48.885077] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:47.299 [2024-12-09 14:00:48.885118] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.299 [2024-12-09 14:00:48.898246] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:47.299 [2024-12-09 14:00:48.898830] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:47.299 [2024-12-09 14:00:48.903043] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.299 [2024-12-09 14:00:48.903363] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:47.299 [2024-12-09 14:00:48.903465] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:47.299 14:00:48 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:47.299 14:00:48 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62852 ]] 00:07:47.299 14:00:48 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:47.299 [2024-12-09 14:00:48.906688] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.299 [2024-12-09 14:00:48.906949] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:47.299 [2024-12-09 14:00:48.907038] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:47.299 [2024-12-09 14:00:48.910653] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:47.299 [2024-12-09 14:00:48.910925] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:47.299 [2024-12-09 14:00:48.911074] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:47.299 [2024-12-09 14:00:48.911165] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:47.299 [2024-12-09 14:00:48.911232] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:48.233 done. 00:07:48.233 14:00:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:48.233 14:00:49 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:48.233 14:00:49 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:48.233 14:00:49 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:48.233 14:00:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.233 14:00:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.233 ************************************ 00:07:48.233 START TEST nvme_reset 00:07:48.233 ************************************ 00:07:48.233 14:00:49 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:48.491 Initializing NVMe Controllers 00:07:48.491 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:48.491 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:48.491 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:48.491 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:48.491 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:48.491 00:07:48.491 real 0m0.209s 00:07:48.491 ************************************ 00:07:48.491 END TEST nvme_reset 00:07:48.491 ************************************ 00:07:48.491 user 0m0.077s 00:07:48.491 sys 0m0.086s 00:07:48.491 14:00:50 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.491 14:00:50 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 14:00:50 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:48.491 14:00:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.491 14:00:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.491 14:00:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 ************************************ 00:07:48.491 START TEST nvme_identify 00:07:48.491 ************************************ 00:07:48.491 14:00:50 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:48.491 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:48.491 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:48.491 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:48.491 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:48.491 14:00:50 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:48.491 14:00:50 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:48.491 14:00:50 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:48.491 14:00:50 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:48.491 14:00:50 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:48.491 14:00:50 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:48.491 14:00:50 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:48.491 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:48.752 [2024-12-09 14:00:50.401048] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62885 terminated unexpected 00:07:48.752 ===================================================== 00:07:48.752 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.752 ===================================================== 00:07:48.752 Controller Capabilities/Features 00:07:48.752 ================================ 00:07:48.752 Vendor ID: 1b36 00:07:48.752 Subsystem Vendor ID: 1af4 00:07:48.752 Serial Number: 12340 00:07:48.752 Model Number: QEMU NVMe Ctrl 00:07:48.752 Firmware Version: 8.0.0 00:07:48.752 Recommended Arb Burst: 6 00:07:48.752 IEEE OUI Identifier: 00 54 52 00:07:48.752 Multi-path I/O 00:07:48.752 May have multiple subsystem ports: No 00:07:48.752 May have multiple controllers: No 00:07:48.752 Associated with SR-IOV VF: No 00:07:48.752 Max Data Transfer Size: 524288 00:07:48.752 Max Number of Namespaces: 256 00:07:48.752 Max Number of I/O Queues: 64 00:07:48.752 NVMe Specification Version (VS): 1.4 00:07:48.752 NVMe Specification Version (Identify): 1.4 00:07:48.752 Maximum Queue Entries: 2048 00:07:48.752 Contiguous Queues Required: Yes 00:07:48.752 Arbitration Mechanisms Supported 00:07:48.752 Weighted Round Robin: Not Supported 00:07:48.752 Vendor Specific: Not Supported 00:07:48.752 Reset Timeout: 7500 ms 00:07:48.752 Doorbell Stride: 4 bytes 00:07:48.752 NVM Subsystem Reset: Not Supported 00:07:48.752 Command Sets Supported 00:07:48.752 NVM Command Set: Supported 00:07:48.752 Boot Partition: Not Supported 00:07:48.752 Memory Page Size Minimum: 4096 bytes 00:07:48.752 Memory Page Size Maximum: 65536 bytes 00:07:48.752 Persistent Memory Region: Not Supported 00:07:48.752 Optional Asynchronous Events Supported 00:07:48.752 Namespace Attribute Notices: Supported 00:07:48.752 Firmware Activation Notices: Not Supported 00:07:48.752 ANA Change Notices: Not Supported 00:07:48.752 PLE Aggregate Log Change Notices: Not Supported 00:07:48.752 LBA Status Info Alert Notices: Not Supported 00:07:48.752 EGE Aggregate Log Change Notices: Not Supported 00:07:48.752 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.752 Zone Descriptor Change Notices: Not Supported 00:07:48.752 Discovery Log Change Notices: Not Supported 00:07:48.752 Controller Attributes 00:07:48.752 128-bit Host Identifier: Not Supported 00:07:48.752 Non-Operational Permissive Mode: Not Supported 00:07:48.752 NVM Sets: Not Supported 00:07:48.752 Read Recovery Levels: Not Supported 00:07:48.753 Endurance Groups: Not Supported 00:07:48.753 Predictable Latency Mode: Not Supported 00:07:48.753 Traffic Based Keep ALive: Not Supported 00:07:48.753 Namespace Granularity: Not Supported 00:07:48.753 SQ Associations: Not Supported 00:07:48.753 UUID List: Not Supported 00:07:48.753 Multi-Domain Subsystem: Not Supported 00:07:48.753 Fixed Capacity Management: Not Supported 00:07:48.753 Variable Capacity Management: Not Supported 00:07:48.753 Delete Endurance Group: Not Supported 00:07:48.753 Delete NVM Set: Not Supported 00:07:48.753 Extended LBA Formats Supported: Supported 00:07:48.753 Flexible Data Placement Supported: Not Supported 00:07:48.753 00:07:48.753 Controller Memory Buffer Support 00:07:48.753 ================================ 00:07:48.753 Supported: No 00:07:48.753 00:07:48.753 Persistent
Memory Region Support 00:07:48.753 ================================ 00:07:48.753 Supported: No 00:07:48.753 00:07:48.753 Admin Command Set Attributes 00:07:48.753 ============================ 00:07:48.753 Security Send/Receive: Not Supported 00:07:48.753 Format NVM: Supported 00:07:48.753 Firmware Activate/Download: Not Supported 00:07:48.753 Namespace Management: Supported 00:07:48.753 Device Self-Test: Not Supported 00:07:48.753 Directives: Supported 00:07:48.753 NVMe-MI: Not Supported 00:07:48.753 Virtualization Management: Not Supported 00:07:48.753 Doorbell Buffer Config: Supported 00:07:48.753 Get LBA Status Capability: Not Supported 00:07:48.753 Command & Feature Lockdown Capability: Not Supported 00:07:48.753 Abort Command Limit: 4 00:07:48.753 Async Event Request Limit: 4 00:07:48.753 Number of Firmware Slots: N/A 00:07:48.753 Firmware Slot 1 Read-Only: N/A 00:07:48.753 Firmware Activation Without Reset: N/A 00:07:48.753 Multiple Update Detection Support: N/A 00:07:48.753 Firmware Update Granularity: No Information Provided 00:07:48.753 Per-Namespace SMART Log: Yes 00:07:48.753 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.753 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:48.753 Command Effects Log Page: Supported 00:07:48.753 Get Log Page Extended Data: Supported 00:07:48.753 Telemetry Log Pages: Not Supported 00:07:48.753 Persistent Event Log Pages: Not Supported 00:07:48.753 Supported Log Pages Log Page: May Support 00:07:48.753 Commands Supported & Effects Log Page: Not Supported 00:07:48.753 Feature Identifiers & Effects Log Page:May Support 00:07:48.753 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.753 Data Area 4 for Telemetry Log: Not Supported 00:07:48.753 Error Log Page Entries Supported: 1 00:07:48.753 Keep Alive: Not Supported 00:07:48.753 00:07:48.753 NVM Command Set Attributes 00:07:48.753 ========================== 00:07:48.753 Submission Queue Entry Size 00:07:48.753 Max: 64 00:07:48.753 Min: 64 00:07:48.753 Completion Queue Entry Size 00:07:48.753 Max: 16 00:07:48.753 Min: 16 00:07:48.753 Number of Namespaces: 256 00:07:48.753 Compare Command: Supported 00:07:48.753 Write Uncorrectable Command: Not Supported 00:07:48.753 Dataset Management Command: Supported 00:07:48.753 Write Zeroes Command: Supported 00:07:48.753 Set Features Save Field: Supported 00:07:48.753 Reservations: Not Supported 00:07:48.753 Timestamp: Supported 00:07:48.753 Copy: Supported 00:07:48.753 Volatile Write Cache: Present 00:07:48.753 Atomic Write Unit (Normal): 1 00:07:48.753 Atomic Write Unit (PFail): 1 00:07:48.753 Atomic Compare & Write Unit: 1 00:07:48.753 Fused Compare & Write: Not Supported 00:07:48.753 Scatter-Gather List 00:07:48.753 SGL Command Set: Supported 00:07:48.753 SGL Keyed: Not Supported 00:07:48.753 SGL Bit Bucket Descriptor: Not Supported 00:07:48.753 SGL Metadata Pointer: Not Supported 00:07:48.753 Oversized SGL: Not Supported 00:07:48.753 SGL Metadata Address: Not Supported 00:07:48.753 SGL Offset: Not Supported 00:07:48.753 Transport SGL Data Block: Not Supported 00:07:48.753 Replay Protected Memory Block: Not Supported 00:07:48.753 00:07:48.753 Firmware Slot Information 00:07:48.753 ========================= 00:07:48.753 Active slot: 1 00:07:48.753 Slot 1 Firmware Revision: 1.0 00:07:48.753 00:07:48.753 00:07:48.753 Commands Supported and Effects 00:07:48.753 ============================== 00:07:48.753 Admin Commands 00:07:48.753 -------------- 00:07:48.753 Delete I/O Submission Queue (00h): Supported 00:07:48.753 Create I/O Submission 
Queue (01h): Supported 00:07:48.753 Get Log Page (02h): Supported 00:07:48.753 Delete I/O Completion Queue (04h): Supported 00:07:48.753 Create I/O Completion Queue (05h): Supported 00:07:48.753 Identify (06h): Supported 00:07:48.753 Abort (08h): Supported 00:07:48.753 Set Features (09h): Supported 00:07:48.753 Get Features (0Ah): Supported 00:07:48.753 Asynchronous Event Request (0Ch): Supported 00:07:48.753 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.753 Directive Send (19h): Supported 00:07:48.753 Directive Receive (1Ah): Supported 00:07:48.753 Virtualization Management (1Ch): Supported 00:07:48.753 Doorbell Buffer Config (7Ch): Supported 00:07:48.753 Format NVM (80h): Supported LBA-Change 00:07:48.753 I/O Commands 00:07:48.753 ------------ 00:07:48.753 Flush (00h): Supported LBA-Change 00:07:48.753 Write (01h): Supported LBA-Change 00:07:48.753 Read (02h): Supported 00:07:48.753 Compare (05h): Supported 00:07:48.753 Write Zeroes (08h): Supported LBA-Change 00:07:48.753 Dataset Management (09h): Supported LBA-Change 00:07:48.753 Unknown (0Ch): Supported 00:07:48.753 Unknown (12h): Supported 00:07:48.753 Copy (19h): Supported LBA-Change 00:07:48.753 Unknown (1Dh): Supported LBA-Change 00:07:48.753 00:07:48.753 Error Log 00:07:48.753 ========= 00:07:48.753 00:07:48.753 Arbitration 00:07:48.753 =========== 00:07:48.753 Arbitration Burst: no limit 00:07:48.753 00:07:48.753 Power Management 00:07:48.753 ================ 00:07:48.753 Number of Power States: 1 00:07:48.753 Current Power State: Power State #0 00:07:48.753 Power State #0: 00:07:48.753 Max Power: 25.00 W 00:07:48.753 Non-Operational State: Operational 00:07:48.753 Entry Latency: 16 microseconds 00:07:48.753 Exit Latency: 4 microseconds 00:07:48.753 Relative Read Throughput: 0 00:07:48.753 Relative Read Latency: 0 00:07:48.753 Relative Write Throughput: 0 00:07:48.753 Relative Write Latency: 0 00:07:48.753 Idle Power: Not Reported 00:07:48.753 Active Power: Not Reported 00:07:48.753 Non-Operational Permissive Mode: Not Supported 00:07:48.753 00:07:48.753 Health Information 00:07:48.753 ================== 00:07:48.753 Critical Warnings: 00:07:48.753 Available Spare Space: OK 00:07:48.753 Temperature: OK 00:07:48.753 Device Reliability: OK 00:07:48.753 Read Only: No 00:07:48.753 Volatile Memory Backup: OK 00:07:48.753 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.753 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.753 Available Spare: 0% 00:07:48.753 Available Spare Threshold: 0% 00:07:48.753 Life Percentage Used: 0% 00:07:48.753 Data Units Read: 659 00:07:48.753 Data Units Written: 587 00:07:48.753 Host Read Commands: 35649 00:07:48.753 Host Write Commands: 35435 00:07:48.753 Controller Busy Time: 0 minutes 00:07:48.753 Power Cycles: 0 00:07:48.753 Power On Hours: 0 hours 00:07:48.753 Unsafe Shutdowns: 0 00:07:48.753 Unrecoverable Media Errors: 0 00:07:48.753 Lifetime Error Log Entries: 0 00:07:48.753 Warning Temperature Time: 0 minutes 00:07:48.753 Critical Temperature Time: 0 minutes 00:07:48.754 00:07:48.754 Number of Queues 00:07:48.754 ================ 00:07:48.754 Number of I/O Submission Queues: 64 00:07:48.754 Number of I/O Completion Queues: 64 00:07:48.754 00:07:48.754 ZNS Specific Controller Data 00:07:48.754 ============================ 00:07:48.754 Zone Append Size Limit: 0 00:07:48.754 00:07:48.754 00:07:48.754 Active Namespaces 00:07:48.754 ================= 00:07:48.754 Namespace ID:1 00:07:48.754 Error Recovery Timeout: Unlimited 00:07:48.754 Command Set Identifier: NVM (00h) 
00:07:48.754 Deallocate: Supported 00:07:48.754 Deallocated/Unwritten Error: Supported 00:07:48.754 Deallocated Read Value: All 0x00 00:07:48.754 Deallocate in Write Zeroes: Not Supported 00:07:48.754 Deallocated Guard Field: 0xFFFF 00:07:48.754 Flush: Supported 00:07:48.754 Reservation: Not Supported 00:07:48.754 Metadata Transferred as: Separate Metadata Buffer 00:07:48.754 Namespace Sharing Capabilities: Private 00:07:48.754 Size (in LBAs): 1548666 (5GiB) 00:07:48.754 Capacity (in LBAs): 1548666 (5GiB) 00:07:48.754 Utilization (in LBAs): 1548666 (5GiB) 00:07:48.754 Thin Provisioning: Not Supported 00:07:48.754 Per-NS Atomic Units: No 00:07:48.754 Maximum Single Source Range Length: 128 00:07:48.754 Maximum Copy Length: 128 00:07:48.754 Maximum Source Range Count: 128 00:07:48.754 NGUID/EUI64 Never Reused: No 00:07:48.754 Namespace Write Protected: No 00:07:48.754 Number of LBA Formats: 8 00:07:48.754 Current LBA Format: LBA Format #07 00:07:48.754 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.754 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.754 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.754 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.754 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.754 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.754 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.754 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.754 00:07:48.754 NVM Specific Namespace Data 00:07:48.754 =========================== 00:07:48.754 Logical Block Storage Tag Mask: 0 00:07:48.754 Protection Information Capabilities: 00:07:48.754 16b Guard Protection Information Storage Tag Support: No 00:07:48.754 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.754 Storage Tag Check Read Support: No 00:07:48.754 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.754 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.754 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.754 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.754 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.754 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.754 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.754 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.754 ===================================================== 00:07:48.754 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.754 ===================================================== 00:07:48.754 Controller Capabilities/Features 00:07:48.754 ================================ 00:07:48.754 Vendor ID: 1b36 00:07:48.754 Subsystem Vendor ID: 1af4 00:07:48.754 Serial Number: 12341 00:07:48.754 Model Number: QEMU NVMe Ctrl 00:07:48.754 Firmware Version: 8.0.0 00:07:48.754 Recommended Arb Burst: 6 00:07:48.754 IEEE OUI Identifier: 00 54 52 00:07:48.754 Multi-path I/O 00:07:48.754 May have multiple subsystem ports: No 00:07:48.754 May have multiple controllers: No 00:07:48.754 Associated with SR-IOV VF: No 00:07:48.754 Max Data Transfer Size: 524288 00:07:48.754 Max Number of Namespaces: 256 00:07:48.754 Max Number of I/O Queues: 64 00:07:48.754 NVMe Specification 
Version (VS): 1.4 00:07:48.754 NVMe Specification Version (Identify): 1.4 00:07:48.754 Maximum Queue Entries: 2048 00:07:48.754 Contiguous Queues Required: Yes 00:07:48.754 Arbitration Mechanisms Supported 00:07:48.754 Weighted Round Robin: Not Supported 00:07:48.754 Vendor Specific: Not Supported 00:07:48.754 Reset Timeout: 7500 ms 00:07:48.754 Doorbell Stride: 4 bytes 00:07:48.754 NVM Subsystem Reset: Not Supported 00:07:48.754 Command Sets Supported 00:07:48.754 NVM Command Set: Supported 00:07:48.754 Boot Partition: Not Supported 00:07:48.754 Memory Page Size Minimum: 4096 bytes 00:07:48.754 Memory Page Size Maximum: 65536 bytes 00:07:48.754 Persistent Memory Region: Not Supported 00:07:48.754 Optional Asynchronous Events Supported 00:07:48.754 Namespace Attribute Notices: Supported 00:07:48.754 Firmware Activation Notices: Not Supported 00:07:48.754 ANA Change Notices: Not Supported 00:07:48.754 PLE Aggregate Log Change Notices: Not Supported 00:07:48.754 LBA Status Info Alert Notices: Not Supported 00:07:48.754 EGE Aggregate Log Change Notices: Not Supported 00:07:48.754 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.754 Zone Descriptor Change Notices: Not Supported 00:07:48.754 Discovery Log Change Notices: Not Supported 00:07:48.754 Controller Attributes 00:07:48.754 128-bit Host Identifier: Not Supported 00:07:48.754 Non-Operational Permissive Mode: Not Supported 00:07:48.754 NVM Sets: Not Supported 00:07:48.754 Read Recovery Levels: Not Supported 00:07:48.754 Endurance Groups: Not Supported 00:07:48.754 Predictable Latency Mode: Not Supported 00:07:48.754 Traffic Based Keep ALive: Not Supported 00:07:48.754 Namespace Granularity: Not Supported 00:07:48.754 SQ Associations: Not Supported 00:07:48.754 UUID List: Not Supported 00:07:48.754 Multi-Domain Subsystem: Not Supported 00:07:48.754 Fixed Capacity Management: Not Supported 00:07:48.754 Variable Capacity Management: Not Supported 00:07:48.754 Delete Endurance Group: Not Supported 00:07:48.754 Delete NVM Set: Not Supported 00:07:48.754 Extended LBA Formats Supported: Supported 00:07:48.754 Flexible Data Placement Supported: Not Supported 00:07:48.754 00:07:48.754 Controller Memory Buffer Support 00:07:48.754 ================================ 00:07:48.754 Supported: No 00:07:48.754 00:07:48.754 Persistent Memory Region Support 00:07:48.754 ================================ 00:07:48.754 Supported: No 00:07:48.754 00:07:48.754 Admin Command Set Attributes 00:07:48.754 ============================ 00:07:48.754 Security Send/Receive: Not Supported 00:07:48.754 Format NVM: Supported 00:07:48.754 Firmware Activate/Download: Not Supported 00:07:48.754 Namespace Management: Supported 00:07:48.754 Device Self-Test: Not Supported 00:07:48.754 Directives: Supported 00:07:48.754 NVMe-MI: Not Supported 00:07:48.754 Virtualization Management: Not Supported 00:07:48.754 Doorbell Buffer Config: Supported 00:07:48.754 Get LBA Status Capability: Not Supported 00:07:48.754 Command & Feature Lockdown Capability: Not Supported 00:07:48.754 Abort Command Limit: 4 00:07:48.754 Async Event Request Limit: 4 00:07:48.754 Number of Firmware Slots: N/A 00:07:48.754 Firmware Slot 1 Read-Only: N/A 00:07:48.754 Firmware Activation Without Reset: N/A 00:07:48.754 Multiple Update Detection Support: N/A 00:07:48.754 Firmware Update Granularity: No Information Provided 00:07:48.754 Per-Namespace SMART Log: Yes 00:07:48.754 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.754 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:48.754 Command 
Effects Log Page: Supported 00:07:48.754 Get Log Page Extended Data: Supported 00:07:48.754 Telemetry Log Pages: Not Supported 00:07:48.754 Persistent Event Log Pages: Not Supported 00:07:48.754 Supported Log Pages Log Page: May Support 00:07:48.754 Commands Supported & Effects Log Page: Not Supported 00:07:48.754 Feature Identifiers & Effects Log Page:May Support 00:07:48.754 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.754 Data Area 4 for Telemetry Log: Not Supported 00:07:48.754 Error Log Page Entries Supported: 1 00:07:48.755 Keep Alive: Not Supported 00:07:48.755 00:07:48.755 NVM Command Set Attributes 00:07:48.755 ========================== 00:07:48.755 Submission Queue Entry Size 00:07:48.755 Max: 64 00:07:48.755 Min: 64 00:07:48.755 Completion Queue Entry Size 00:07:48.755 Max: 16 00:07:48.755 Min: 16 00:07:48.755 Number of Namespaces: 256 00:07:48.755 Compare Command: Supported 00:07:48.755 Write Uncorrectable Command: Not Supported 00:07:48.755 Dataset Management Command: Supported 00:07:48.755 Write Zeroes Command: Supported 00:07:48.755 Set Features Save Field: Supported 00:07:48.755 Reservations: Not Supported 00:07:48.755 Timestamp: Supported 00:07:48.755 Copy: Supported 00:07:48.755 Volatile Write Cache: Present 00:07:48.755 Atomic Write Unit (Normal): 1 00:07:48.755 Atomic Write Unit (PFail): 1 00:07:48.755 Atomic Compare & Write Unit: 1 00:07:48.755 Fused Compare & Write: Not Supported 00:07:48.755 Scatter-Gather List 00:07:48.755 SGL Command Set: Supported 00:07:48.755 SGL Keyed: Not Supported 00:07:48.755 SGL Bit Bucket Descriptor: Not Supported 00:07:48.755 SGL Metadata Pointer: Not Supported 00:07:48.755 Oversized SGL: Not Supported 00:07:48.755 SGL Metadata Address: Not Supported 00:07:48.755 SGL Offset: Not Supported 00:07:48.755 Transport SGL Data Block: Not Supported 00:07:48.755 Replay Protected Memory Block: Not Supported 00:07:48.755 00:07:48.755 Firmware Slot Information 00:07:48.755 ========================= 00:07:48.755 Active slot: 1 00:07:48.755 Slot 1 Firmware Revision: 1.0 00:07:48.755 00:07:48.755 00:07:48.755 Commands Supported and Effects 00:07:48.755 ============================== 00:07:48.755 Admin Commands 00:07:48.755 -------------- 00:07:48.755 Delete I/O Submission Queue (00h): Supported 00:07:48.755 Create I/O Submission Queue (01h): Supported 00:07:48.755 Get Log Page (02h): Supported 00:07:48.755 Delete I/O Completion Queue (04h): Supported 00:07:48.755 Create I/O Completion Queue (05h): Supported 00:07:48.755 Identify (06h): Supported 00:07:48.755 Abort (08h): Supported 00:07:48.755 Set Features (09h): Supported 00:07:48.755 Get Features (0Ah): Supported 00:07:48.755 Asynchronous Event Request (0Ch): Supported 00:07:48.755 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.755 Directive Send (19h): Supported 00:07:48.755 Directive Receive (1Ah): Supported 00:07:48.755 Virtualization Management (1Ch): Supported 00:07:48.755 Doorbell Buffer Config (7Ch): Supported 00:07:48.755 Format NVM (80h): Supported LBA-Change 00:07:48.755 I/O Commands 00:07:48.755 ------------ 00:07:48.755 Flush (00h): Supported LBA-Change 00:07:48.755 Write (01h): Supported LBA-Change 00:07:48.755 Read (02h): Supported 00:07:48.755 Compare (05h): Supported 00:07:48.755 Write Zeroes (08h): Supported LBA-Change 00:07:48.755 Dataset Management (09h): Supported LBA-Change 00:07:48.755 Unknown (0Ch): Supported 00:07:48.755 Unknown (12h): Supported 00:07:48.755 Copy (19h): Supported LBA-Change 00:07:48.755 Unknown (1Dh): Supported LBA-Change 
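A note on where these identify reports come from: as the xtrace before the first report shows, nvme_identify first gathers every controller's PCI address (get_nvme_bdfs pipes gen_nvme.sh through jq), then a single spdk_nvme_identify -i 0 pass attaches to all of them and prints one report per controller. A minimal standalone sketch of that enumeration, with paths assuming the same SPDK checkout:
```bash
#!/usr/bin/env bash
# Sketch of the enumeration step behind these reports: gen_nvme.sh
# emits a JSON config with one bdev_nvme_attach_controller entry per
# controller; jq pulls out the PCI addresses (traddr).
rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"

# One identify pass covers every controller; -i 0 joins shared memory
# group 0 so it can attach alongside the stub started earlier with
# the same '-i 0'.
"$rootdir/build/bin/spdk_nvme_identify" -i 0
```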
00:07:48.755 00:07:48.755 Error Log 00:07:48.755 ========= 00:07:48.755 00:07:48.755 Arbitration 00:07:48.755 =========== 00:07:48.755 Arbitration Burst: no limit 00:07:48.755 00:07:48.755 Power Management 00:07:48.755 ================ 00:07:48.755 Number of Power States: 1 00:07:48.755 Current Power State: Power State #0 00:07:48.755 Power State #0: 00:07:48.755 Max Power: 25.00 W 00:07:48.755 Non-Operational State: Operational 00:07:48.755 Entry Latency: 16 microseconds 00:07:48.755 Exit Latency: 4 microseconds 00:07:48.755 Relative Read Throughput: 0 00:07:48.755 Relative Read Latency: 0 00:07:48.755 Relative Write Throughput: 0 00:07:48.755 Relative Write Latency: 0 00:07:48.755 Idle Power: Not Reported 00:07:48.755 Active Power: Not Reported 00:07:48.755 Non-Operational Permissive Mode: Not Supported 00:07:48.755 00:07:48.755 Health Information 00:07:48.755 ================== 00:07:48.755 Critical Warnings: 00:07:48.755 Available Spare Space: OK 00:07:48.755 Temperature: OK 00:07:48.755 Device Reliability: OK 00:07:48.755 Read Only: No 00:07:48.755 Volatile Memory Backup: OK 00:07:48.755 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.755 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.755 Available Spare: 0% 00:07:48.755 Available Spare Threshold: 0% 00:07:48.755 Life Percentage Used: 0% 00:07:48.755 Data Units Read: 1037 00:07:48.755 Data Units Written: 904 00:07:48.755 Host Read Commands: 54950 00:07:48.755 Host Write Commands: 53751 00:07:48.755 Controller Busy Time: 0 minutes 00:07:48.755 Power Cycles: 0 00:07:48.755 Power On Hours: 0 hours 00:07:48.755 Unsafe Shutdowns: 0 00:07:48.755 Unrecoverable Media Errors: 0 00:07:48.755 Lifetime Error Log Entries: 0 00:07:48.755 Warning Temperature Time: 0 minutes 00:07:48.755 Critical Temperature Time: 0 minutes 00:07:48.755 00:07:48.755 Number of Queues 00:07:48.755 ================ 00:07:48.755 Number of I/O Submission Queues: 64 00:07:48.755 Number of I/O Completion Queues: 64 00:07:48.755 00:07:48.755 ZNS Specific Controller Data 00:07:48.755 ============================ 00:07:48.755 Zone Append Size Limit: 0 00:07:48.755 00:07:48.755 00:07:48.755 Active Namespaces 00:07:48.755 ================= 00:07:48.755 Namespace ID:1 00:07:48.755 Error Recovery Timeout: Unlimited 00:07:48.755 Command Set Identifier: NVM (00h) 00:07:48.755 Deallocate: Supported 00:07:48.755 Deallocated/Unwritten Error: Supported 00:07:48.755 Deallocated Read Value: All 0x00 00:07:48.755 Deallocate in Write Zeroes: Not Supported 00:07:48.755 Deallocated Guard Field: 0xFFFF 00:07:48.755 Flush: Supported 00:07:48.755 Reservation: Not Supported 00:07:48.755 Namespace Sharing Capabilities: Private 00:07:48.755 Size (in LBAs): 1310720 (5GiB) 00:07:48.755 Capacity (in LBAs): 1310720 (5GiB) 00:07:48.755 Utilization (in LBAs): 1310720 (5GiB) 00:07:48.755 Thin Provisioning: Not Supported 00:07:48.755 Per-NS Atomic Units: No 00:07:48.755 Maximum Single Source Range Length: 128 00:07:48.755 Maximum Copy Length: 128 00:07:48.755 Maximum Source Range Count: 128 00:07:48.755 NGUID/EUI64 Never Reused: No 00:07:48.755 Namespace Write Protected: No 00:07:48.755 Number of LBA Formats: 8 00:07:48.755 Current LBA Format: LBA Format #04 00:07:48.755 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.755 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.755 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.755 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.755 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.755 LBA Format #05: 
Data Size: 4096 Metadata Size: 8 00:07:48.755 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.755 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.755 00:07:48.755 NVM Specific Namespace Data 00:07:48.755 =========================== 00:07:48.755 Logical Block Storage Tag Mask: 0 00:07:48.755 Protection Information Capabilities: 00:07:48.755 16b Guard Protection Information Storage Tag Support: No 00:07:48.755 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.755 Storage Tag Check Read Support: No 00:07:48.755 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.755 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.755 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.755 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.755 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.755 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.755 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.755 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.755 ===================================================== 00:07:48.755 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.755 ===================================================== 00:07:48.755 Controller Capabilities/Features 00:07:48.755 ================================ 00:07:48.755 Vendor ID: 1b36 00:07:48.755 Subsystem Vendor ID: 1af4 00:07:48.755 Serial Number: 12343 00:07:48.755 Model Number: QEMU NVMe Ctrl 00:07:48.755 Firmware Version: 8.0.0 00:07:48.755 Recommended Arb Burst: 6 00:07:48.755 IEEE OUI Identifier: 00 54 52 00:07:48.755 Multi-path I/O 00:07:48.755 May have multiple subsystem ports: No 00:07:48.755 May have multiple controllers: Yes 00:07:48.755 Associated with SR-IOV VF: No 00:07:48.755 Max Data Transfer Size: 524288 00:07:48.755 Max Number of Namespaces: 256 00:07:48.755 Max Number of I/O Queues: 64 00:07:48.755 NVMe Specification Version (VS): 1.4 00:07:48.755 NVMe Specification Version (Identify): 1.4 00:07:48.755 Maximum Queue Entries: 2048 00:07:48.755 Contiguous Queues Required: Yes 00:07:48.755 Arbitration Mechanisms Supported 00:07:48.755 Weighted Round Robin: Not Supported 00:07:48.755 Vendor Specific: Not Supported 00:07:48.755 Reset Timeout: 7500 ms 00:07:48.755 Doorbell Stride: 4 bytes 00:07:48.756 NVM Subsystem Reset: Not Supported 00:07:48.756 Command Sets Supported 00:07:48.756 NVM Command Set: Supported 00:07:48.756 Boot Partition: Not Supported 00:07:48.756 Memory Page Size Minimum: 4096 bytes 00:07:48.756 Memory Page Size Maximum: 65536 bytes 00:07:48.756 Persistent Memory Region: Not Supported 00:07:48.756 Optional Asynchronous Events Supported 00:07:48.756 Namespace Attribute Notices: Supported 00:07:48.756 Firmware Activation Notices: Not Supported 00:07:48.756 ANA Change Notices: Not Supported 00:07:48.756 PLE Aggregate Log Change Notices: Not Supported 00:07:48.756 LBA Status Info Alert Notices: Not Supported 00:07:48.756 EGE Aggregate Log Change Notices: Not Supported 00:07:48.756 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.756 Zone Descriptor Change Notices: Not Supported 00:07:48.756 Discovery Log Change Notices: Not Supported 00:07:48.756 Controller 
Attributes 00:07:48.756 128-bit Host Identifier: Not Supported 00:07:48.756 Non-Operational Permissive Mode: Not Supported 00:07:48.756 NVM Sets: Not Supported 00:07:48.756 Read Recovery Levels: Not Supported 00:07:48.756 Endurance Groups: Supported 00:07:48.756 Predictable Latency Mode: Not Supported 00:07:48.756 Traffic Based Keep ALive: Not Supported 00:07:48.756 Namespace Granularity: Not Supported 00:07:48.756 SQ Associations: Not Supported 00:07:48.756 UUID List: Not Supported 00:07:48.756 Multi-Domain Subsystem: Not Supported 00:07:48.756 Fixed Capacity Management: Not Supported 00:07:48.756 Variable Capacity Management: Not Supported 00:07:48.756 Delete Endurance Group: Not Supported 00:07:48.756 Delete NVM Set: Not Supported 00:07:48.756 Extended LBA Formats Supported: Supported 00:07:48.756 Flexible Data Placement Supported: Supported 00:07:48.756 00:07:48.756 Controller Memory Buffer Support 00:07:48.756 ================================ 00:07:48.756 Supported: No 00:07:48.756 00:07:48.756 Persistent Memory Region Support 00:07:48.756 ================================ 00:07:48.756 Supported: No 00:07:48.756 00:07:48.756 Admin Command Set Attributes 00:07:48.756 ============================ 00:07:48.756 Security Send/Receive: Not Supported 00:07:48.756 Format NVM: Supported 00:07:48.756 Firmware Activate/Download: Not Supported 00:07:48.756 Namespace Management: Supported 00:07:48.756 Device Self-Test: Not Supported 00:07:48.756 Directives: Supported 00:07:48.756 NVMe-MI: Not Supported 00:07:48.756 Virtualization Management: Not Supported 00:07:48.756 Doorbell Buffer Config: Supported 00:07:48.756 Get LBA Status Capability: Not Supported 00:07:48.756 Command & Feature Lockdown Capability: Not Supported 00:07:48.756 Abort Command Limit: 4 00:07:48.756 Async Event Request Limit: 4 00:07:48.756 Number of Firmware Slots: N/A 00:07:48.756 Firmware Slot 1 Read-Only: N/A 00:07:48.756 Firmware Activation Without Reset: N/A 00:07:48.756 Multiple Update Detection Support: N/A 00:07:48.756 Firmware Update Granularity: No Information Provided 00:07:48.756 Per-Namespace SMART Log: Yes 00:07:48.756 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.756 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:48.756 Command Effects Log Page: Supported 00:07:48.756 Get Log Page Extended Data: Supported 00:07:48.756 Telemetry Log Pages: Not Supported 00:07:48.756 Persistent Event Log Pages: Not Supported 00:07:48.756 Supported Log Pages Log Page: May Support 00:07:48.756 Commands Supported & Effects Log Page: Not Supported 00:07:48.756 Feature Identifiers & Effects Log Page:May Support 00:07:48.756 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.756 Data Area 4 for Telemetry Log: Not Supported 00:07:48.756 Error Log Page Entries Supported: 1 00:07:48.756 Keep Alive: Not Supported 00:07:48.756 00:07:48.756 NVM Command Set Attributes 00:07:48.756 ========================== 00:07:48.756 Submission Queue Entry Size 00:07:48.756 Max: 64 00:07:48.756 Min: 64 00:07:48.756 Completion Queue Entry Size 00:07:48.756 Max: 16 00:07:48.756 Min: 16 00:07:48.756 Number of Namespaces: 256 00:07:48.756 Compare Command: Supported 00:07:48.756 Write Uncorrectable Command: Not Supported 00:07:48.756 Dataset Management Command: Supported 00:07:48.756 Write Zeroes Command: Supported 00:07:48.756 Set Features Save Field: Supported 00:07:48.756 Reservations: Not Supported 00:07:48.756 Timestamp: Supported 00:07:48.756 Copy: Supported 00:07:48.756 Volatile Write Cache: Present 00:07:48.756 Atomic Write Unit 
(Normal): 1 00:07:48.756 Atomic Write Unit (PFail): 1 00:07:48.756 Atomic Compare & Write Unit: 1 00:07:48.756 Fused Compare & Write: Not Supported 00:07:48.756 Scatter-Gather List 00:07:48.756 SGL Command Set: Supported 00:07:48.756 SGL Keyed: Not Supported 00:07:48.756 SGL Bit Bucket Descriptor: Not Supported 00:07:48.756 SGL Metadata Pointer: Not Supported 00:07:48.756 Oversized SGL: Not Supported 00:07:48.756 SGL Metadata Address: Not Supported 00:07:48.756 SGL Offset: Not Supported 00:07:48.756 Transport SGL Data Block: Not Supported 00:07:48.756 Replay Protected Memory Block: Not Supported 00:07:48.756 00:07:48.756 Firmware Slot Information 00:07:48.756 ========================= 00:07:48.756 Active slot: 1 00:07:48.756 Slot 1 Firmware Revision: 1.0 00:07:48.756 00:07:48.756 00:07:48.756 Commands Supported and Effects 00:07:48.756 ============================== 00:07:48.756 Admin Commands 00:07:48.756 -------------- 00:07:48.756 Delete I/O Submission Queue (00h): Supported 00:07:48.756 Create I/O Submission Queue (01h): Supported 00:07:48.756 Get Log Page (02h): Supported 00:07:48.756 Delete I/O Completion Queue (04h): Supported 00:07:48.756 Create I/O Completion Queue (05h): Supported 00:07:48.756 Identify (06h): Supported 00:07:48.756 Abort (08h): Supported 00:07:48.756 Set Features (09h): Supported 00:07:48.756 Get Features (0Ah): Supported 00:07:48.756 Asynchronous Event Request (0Ch): Supported 00:07:48.756 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.756 Directive Send (19h): Supported 00:07:48.756 Directive Receive (1Ah): Supported 00:07:48.756 Virtualization Management (1Ch): Supported 00:07:48.756 Doorbell Buffer Config (7Ch): Supported 00:07:48.756 Format NVM (80h): Supported LBA-Change 00:07:48.756 I/O Commands 00:07:48.756 ------------ 00:07:48.756 Flush (00h): Supported LBA-Change 00:07:48.756 Write (01h): Supported LBA-Change 00:07:48.756 Read (02h): Supported 00:07:48.756 Compare (05h): Supported 00:07:48.756 Write Zeroes (08h): Supported LBA-Change 00:07:48.756 Dataset Management (09h): Supported LBA-Change 00:07:48.756 Unknown (0Ch): Supported 00:07:48.756 Unknown (12h): Supported 00:07:48.756 Copy (19h): Supported LBA-Change 00:07:48.756 Unknown (1Dh): Supported LBA-Change 00:07:48.756 00:07:48.756 Error Log 00:07:48.756 ========= 00:07:48.756 00:07:48.756 Arbitration 00:07:48.756 =========== 00:07:48.756 Arbitration Burst: no limit 00:07:48.756 00:07:48.756 Power Management 00:07:48.756 ================ 00:07:48.756 Number of Power States: 1 00:07:48.756 Current Power State: Power State #0 00:07:48.756 Power State #0: 00:07:48.756 Max Power: 25.00 W 00:07:48.756 Non-Operational State: Operational 00:07:48.756 Entry Latency: 16 microseconds 00:07:48.756 Exit Latency: 4 microseconds 00:07:48.756 Relative Read Throughput: 0 00:07:48.756 Relative Read Latency: 0 00:07:48.756 Relative Write Throughput: 0 00:07:48.756 Relative Write Latency: 0 00:07:48.756 Idle Power: Not Reported 00:07:48.756 Active Power: Not Reported 00:07:48.756 Non-Operational Permissive Mode: Not Supported 00:07:48.756 00:07:48.756 Health Information 00:07:48.756 ================== 00:07:48.756 Critical Warnings: 00:07:48.756 Available Spare Space: OK 00:07:48.756 Temperature: OK 00:07:48.756 Device Reliability: OK 00:07:48.756 Read Only: No 00:07:48.756 Volatile Memory Backup: OK 00:07:48.756 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.756 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.756 Available Spare: 0% 00:07:48.756 Available Spare Threshold: 0% 
00:07:48.756 [2024-12-09 14:00:50.402170] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62885 terminated unexpected 00:07:48.756 [2024-12-09 14:00:50.402571] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62885 terminated unexpected 00:07:48.756 [2024-12-09 14:00:50.403213] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62885 terminated unexpected 00:07:48.756 Life Percentage Used: 0% 00:07:48.756 Data Units Read: 816 00:07:48.756 Data Units Written: 745 00:07:48.756 Host Read Commands: 37235 00:07:48.756 Host Write Commands: 36658 00:07:48.756 Controller Busy Time: 0 minutes 00:07:48.756 Power Cycles: 0 00:07:48.756 Power On Hours: 0 hours 00:07:48.756 Unsafe Shutdowns: 0 00:07:48.756 Unrecoverable Media Errors: 0 00:07:48.756 Lifetime Error Log Entries: 0 00:07:48.756 Warning Temperature Time: 0 minutes 00:07:48.756 Critical Temperature Time: 0 minutes 00:07:48.756 00:07:48.757 Number of Queues 00:07:48.757 ================ 00:07:48.757 Number of I/O Submission Queues: 64 00:07:48.757 Number of I/O Completion Queues: 64 00:07:48.757 00:07:48.757 ZNS Specific Controller Data 00:07:48.757 ============================ 00:07:48.757 Zone Append Size Limit: 0 00:07:48.757 00:07:48.757 00:07:48.757 Active Namespaces 00:07:48.757 ================= 00:07:48.757 Namespace ID:1 00:07:48.757 Error Recovery Timeout: Unlimited 00:07:48.757 Command Set Identifier: NVM (00h) 00:07:48.757 Deallocate: Supported 00:07:48.757 Deallocated/Unwritten Error: Supported 00:07:48.757 Deallocated Read Value: All 0x00 00:07:48.757 Deallocate in Write Zeroes: Not Supported 00:07:48.757 Deallocated Guard Field: 0xFFFF 00:07:48.757 Flush: Supported 00:07:48.757 Reservation: Not Supported 00:07:48.757 Namespace Sharing Capabilities: Multiple Controllers 00:07:48.757 Size (in LBAs): 262144 (1GiB) 00:07:48.757 Capacity (in LBAs): 262144 (1GiB) 00:07:48.757 Utilization (in LBAs): 262144 (1GiB) 00:07:48.757 Thin Provisioning: Not Supported 00:07:48.757 Per-NS Atomic Units: No 00:07:48.757 Maximum Single Source Range Length: 128 00:07:48.757 Maximum Copy Length: 128 00:07:48.757 Maximum Source Range Count: 128 00:07:48.757 NGUID/EUI64 Never Reused: No 00:07:48.757 Namespace Write Protected: No 00:07:48.757 Endurance group ID: 1 00:07:48.757 Number of LBA Formats: 8 00:07:48.757 Current LBA Format: LBA Format #04 00:07:48.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.757 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.757 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.757 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.757 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.757 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.757 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.757 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.757 00:07:48.757 Get Feature FDP: 00:07:48.757 ================ 00:07:48.757 Enabled: Yes 00:07:48.757 FDP configuration index: 0 00:07:48.757 00:07:48.757 FDP configurations log page 00:07:48.757 =========================== 00:07:48.757 Number of FDP configurations: 1 00:07:48.757 Version: 0 00:07:48.757 Size: 112 00:07:48.757 FDP Configuration Descriptor: 0 00:07:48.757 Descriptor Size: 96 00:07:48.757 Reclaim Group Identifier format: 2 00:07:48.757 FDP Volatile Write Cache: Not Present 00:07:48.757 FDP Configuration: Valid 00:07:48.757 Vendor Specific Size: 0 00:07:48.757
Number of Reclaim Groups: 2 00:07:48.757 Number of Reclaim Unit Handles: 8 00:07:48.757 Max Placement Identifiers: 128 00:07:48.757 Number of Namespaces Supported: 256 00:07:48.757 Reclaim Unit Nominal Size: 6000000 bytes 00:07:48.757 Estimated Reclaim Unit Time Limit: Not Reported 00:07:48.757 RUH Desc #000: RUH Type: Initially Isolated 00:07:48.757 RUH Desc #001: RUH Type: Initially Isolated 00:07:48.757 RUH Desc #002: RUH Type: Initially Isolated 00:07:48.757 RUH Desc #003: RUH Type: Initially Isolated 00:07:48.757 RUH Desc #004: RUH Type: Initially Isolated 00:07:48.757 RUH Desc #005: RUH Type: Initially Isolated 00:07:48.757 RUH Desc #006: RUH Type: Initially Isolated 00:07:48.757 RUH Desc #007: RUH Type: Initially Isolated 00:07:48.757 00:07:48.757 FDP reclaim unit handle usage log page 00:07:48.757 ====================================== 00:07:48.757 Number of Reclaim Unit Handles: 8 00:07:48.757 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:48.757 RUH Usage Desc #001: RUH Attributes: Unused 00:07:48.757 RUH Usage Desc #002: RUH Attributes: Unused 00:07:48.757 RUH Usage Desc #003: RUH Attributes: Unused 00:07:48.757 RUH Usage Desc #004: RUH Attributes: Unused 00:07:48.757 RUH Usage Desc #005: RUH Attributes: Unused 00:07:48.757 RUH Usage Desc #006: RUH Attributes: Unused 00:07:48.757 RUH Usage Desc #007: RUH Attributes: Unused 00:07:48.757 00:07:48.757 FDP statistics log page 00:07:48.757 ======================= 00:07:48.757 Host bytes with metadata written: 472563712 00:07:48.757 Media bytes with metadata written: 472629248 00:07:48.757 Media bytes erased: 0 00:07:48.757 00:07:48.757 FDP events log page 00:07:48.757 =================== 00:07:48.757 Number of FDP events: 0 00:07:48.757 00:07:48.757 NVM Specific Namespace Data 00:07:48.757 =========================== 00:07:48.757 Logical Block Storage Tag Mask: 0 00:07:48.757 Protection Information Capabilities: 00:07:48.757 16b Guard Protection Information Storage Tag Support: No 00:07:48.757 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.757 Storage Tag Check Read Support: No 00:07:48.757 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.757 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.757 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.757 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.757 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.757 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.757 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.757 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.757 ===================================================== 00:07:48.757 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.757 ===================================================== 00:07:48.757 Controller Capabilities/Features 00:07:48.757 ================================ 00:07:48.757 Vendor ID: 1b36 00:07:48.757 Subsystem Vendor ID: 1af4 00:07:48.757 Serial Number: 12342 00:07:48.757 Model Number: QEMU NVMe Ctrl 00:07:48.757 Firmware Version: 8.0.0 00:07:48.757 Recommended Arb Burst: 6 00:07:48.757 IEEE OUI Identifier: 00 54 52 00:07:48.757 Multi-path I/O 00:07:48.757 May have
multiple subsystem ports: No 00:07:48.757 May have multiple controllers: No 00:07:48.757 Associated with SR-IOV VF: No 00:07:48.757 Max Data Transfer Size: 524288 00:07:48.757 Max Number of Namespaces: 256 00:07:48.757 Max Number of I/O Queues: 64 00:07:48.757 NVMe Specification Version (VS): 1.4 00:07:48.757 NVMe Specification Version (Identify): 1.4 00:07:48.757 Maximum Queue Entries: 2048 00:07:48.757 Contiguous Queues Required: Yes 00:07:48.757 Arbitration Mechanisms Supported 00:07:48.757 Weighted Round Robin: Not Supported 00:07:48.757 Vendor Specific: Not Supported 00:07:48.757 Reset Timeout: 7500 ms 00:07:48.757 Doorbell Stride: 4 bytes 00:07:48.757 NVM Subsystem Reset: Not Supported 00:07:48.757 Command Sets Supported 00:07:48.757 NVM Command Set: Supported 00:07:48.757 Boot Partition: Not Supported 00:07:48.757 Memory Page Size Minimum: 4096 bytes 00:07:48.757 Memory Page Size Maximum: 65536 bytes 00:07:48.757 Persistent Memory Region: Not Supported 00:07:48.757 Optional Asynchronous Events Supported 00:07:48.757 Namespace Attribute Notices: Supported 00:07:48.757 Firmware Activation Notices: Not Supported 00:07:48.757 ANA Change Notices: Not Supported 00:07:48.757 PLE Aggregate Log Change Notices: Not Supported 00:07:48.757 LBA Status Info Alert Notices: Not Supported 00:07:48.757 EGE Aggregate Log Change Notices: Not Supported 00:07:48.757 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.757 Zone Descriptor Change Notices: Not Supported 00:07:48.757 Discovery Log Change Notices: Not Supported 00:07:48.757 Controller Attributes 00:07:48.757 128-bit Host Identifier: Not Supported 00:07:48.757 Non-Operational Permissive Mode: Not Supported 00:07:48.757 NVM Sets: Not Supported 00:07:48.757 Read Recovery Levels: Not Supported 00:07:48.757 Endurance Groups: Not Supported 00:07:48.757 Predictable Latency Mode: Not Supported 00:07:48.757 Traffic Based Keep Alive: Not Supported 00:07:48.757 Namespace Granularity: Not Supported 00:07:48.758 SQ Associations: Not Supported 00:07:48.758 UUID List: Not Supported 00:07:48.758 Multi-Domain Subsystem: Not Supported 00:07:48.758 Fixed Capacity Management: Not Supported 00:07:48.758 Variable Capacity Management: Not Supported 00:07:48.758 Delete Endurance Group: Not Supported 00:07:48.758 Delete NVM Set: Not Supported 00:07:48.758 Extended LBA Formats Supported: Supported 00:07:48.758 Flexible Data Placement Supported: Not Supported 00:07:48.758 00:07:48.758 Controller Memory Buffer Support 00:07:48.758 ================================ 00:07:48.758 Supported: No 00:07:48.758 00:07:48.758 Persistent Memory Region Support 00:07:48.758 ================================ 00:07:48.758 Supported: No 00:07:48.758 00:07:48.758 Admin Command Set Attributes 00:07:48.758 ============================ 00:07:48.758 Security Send/Receive: Not Supported 00:07:48.758 Format NVM: Supported 00:07:48.758 Firmware Activate/Download: Not Supported 00:07:48.758 Namespace Management: Supported 00:07:48.758 Device Self-Test: Not Supported 00:07:48.758 Directives: Supported 00:07:48.758 NVMe-MI: Not Supported 00:07:48.758 Virtualization Management: Not Supported 00:07:48.758 Doorbell Buffer Config: Supported 00:07:48.758 Get LBA Status Capability: Not Supported 00:07:48.758 Command & Feature Lockdown Capability: Not Supported 00:07:48.758 Abort Command Limit: 4 00:07:48.758 Async Event Request Limit: 4 00:07:48.758 Number of Firmware Slots: N/A 00:07:48.758 Firmware Slot 1 Read-Only: N/A 00:07:48.758 Firmware Activation Without Reset: N/A 00:07:48.758 Multiple
Update Detection Support: N/A 00:07:48.758 Firmware Update Granularity: No Information Provided 00:07:48.758 Per-Namespace SMART Log: Yes 00:07:48.758 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.758 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:48.758 Command Effects Log Page: Supported 00:07:48.758 Get Log Page Extended Data: Supported 00:07:48.758 Telemetry Log Pages: Not Supported 00:07:48.758 Persistent Event Log Pages: Not Supported 00:07:48.758 Supported Log Pages Log Page: May Support 00:07:48.758 Commands Supported & Effects Log Page: Not Supported 00:07:48.758 Feature Identifiers & Effects Log Page: May Support 00:07:48.758 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.758 Data Area 4 for Telemetry Log: Not Supported 00:07:48.758 Error Log Page Entries Supported: 1 00:07:48.758 Keep Alive: Not Supported 00:07:48.758 00:07:48.758 NVM Command Set Attributes 00:07:48.758 ========================== 00:07:48.758 Submission Queue Entry Size 00:07:48.758 Max: 64 00:07:48.758 Min: 64 00:07:48.758 Completion Queue Entry Size 00:07:48.758 Max: 16 00:07:48.758 Min: 16 00:07:48.758 Number of Namespaces: 256 00:07:48.758 Compare Command: Supported 00:07:48.758 Write Uncorrectable Command: Not Supported 00:07:48.758 Dataset Management Command: Supported 00:07:48.758 Write Zeroes Command: Supported 00:07:48.758 Set Features Save Field: Supported 00:07:48.758 Reservations: Not Supported 00:07:48.758 Timestamp: Supported 00:07:48.758 Copy: Supported 00:07:48.758 Volatile Write Cache: Present 00:07:48.758 Atomic Write Unit (Normal): 1 00:07:48.758 Atomic Write Unit (PFail): 1 00:07:48.758 Atomic Compare & Write Unit: 1 00:07:48.758 Fused Compare & Write: Not Supported 00:07:48.758 Scatter-Gather List 00:07:48.758 SGL Command Set: Supported 00:07:48.758 SGL Keyed: Not Supported 00:07:48.758 SGL Bit Bucket Descriptor: Not Supported 00:07:48.758 SGL Metadata Pointer: Not Supported 00:07:48.758 Oversized SGL: Not Supported 00:07:48.758 SGL Metadata Address: Not Supported 00:07:48.758 SGL Offset: Not Supported 00:07:48.758 Transport SGL Data Block: Not Supported 00:07:48.758 Replay Protected Memory Block: Not Supported 00:07:48.758 00:07:48.758 Firmware Slot Information 00:07:48.758 ========================= 00:07:48.758 Active slot: 1 00:07:48.758 Slot 1 Firmware Revision: 1.0 00:07:48.758 00:07:48.758 00:07:48.758 Commands Supported and Effects 00:07:48.758 ============================== 00:07:48.758 Admin Commands 00:07:48.758 -------------- 00:07:48.758 Delete I/O Submission Queue (00h): Supported 00:07:48.758 Create I/O Submission Queue (01h): Supported 00:07:48.758 Get Log Page (02h): Supported 00:07:48.758 Delete I/O Completion Queue (04h): Supported 00:07:48.758 Create I/O Completion Queue (05h): Supported 00:07:48.758 Identify (06h): Supported 00:07:48.758 Abort (08h): Supported 00:07:48.758 Set Features (09h): Supported 00:07:48.758 Get Features (0Ah): Supported 00:07:48.758 Asynchronous Event Request (0Ch): Supported 00:07:48.758 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.758 Directive Send (19h): Supported 00:07:48.758 Directive Receive (1Ah): Supported 00:07:48.758 Virtualization Management (1Ch): Supported 00:07:48.758 Doorbell Buffer Config (7Ch): Supported 00:07:48.758 Format NVM (80h): Supported LBA-Change 00:07:48.758 I/O Commands 00:07:48.758 ------------ 00:07:48.758 Flush (00h): Supported LBA-Change 00:07:48.758 Write (01h): Supported LBA-Change 00:07:48.758 Read (02h): Supported 00:07:48.758 Compare (05h): Supported 00:07:48.758
Write Zeroes (08h): Supported LBA-Change 00:07:48.758 Dataset Management (09h): Supported LBA-Change 00:07:48.758 Unknown (0Ch): Supported 00:07:48.758 Unknown (12h): Supported 00:07:48.758 Copy (19h): Supported LBA-Change 00:07:48.758 Unknown (1Dh): Supported LBA-Change 00:07:48.758 00:07:48.758 Error Log 00:07:48.758 ========= 00:07:48.758 00:07:48.758 Arbitration 00:07:48.758 =========== 00:07:48.758 Arbitration Burst: no limit 00:07:48.758 00:07:48.758 Power Management 00:07:48.758 ================ 00:07:48.758 Number of Power States: 1 00:07:48.758 Current Power State: Power State #0 00:07:48.758 Power State #0: 00:07:48.758 Max Power: 25.00 W 00:07:48.758 Non-Operational State: Operational 00:07:48.758 Entry Latency: 16 microseconds 00:07:48.758 Exit Latency: 4 microseconds 00:07:48.758 Relative Read Throughput: 0 00:07:48.758 Relative Read Latency: 0 00:07:48.758 Relative Write Throughput: 0 00:07:48.758 Relative Write Latency: 0 00:07:48.758 Idle Power: Not Reported 00:07:48.758 Active Power: Not Reported 00:07:48.758 Non-Operational Permissive Mode: Not Supported 00:07:48.758 00:07:48.758 Health Information 00:07:48.758 ================== 00:07:48.758 Critical Warnings: 00:07:48.758 Available Spare Space: OK 00:07:48.758 Temperature: OK 00:07:48.758 Device Reliability: OK 00:07:48.758 Read Only: No 00:07:48.758 Volatile Memory Backup: OK 00:07:48.758 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.758 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.758 Available Spare: 0% 00:07:48.758 Available Spare Threshold: 0% 00:07:48.758 Life Percentage Used: 0% 00:07:48.758 Data Units Read: 2107 00:07:48.758 Data Units Written: 1894 00:07:48.758 Host Read Commands: 108683 00:07:48.758 Host Write Commands: 106954 00:07:48.758 Controller Busy Time: 0 minutes 00:07:48.758 Power Cycles: 0 00:07:48.758 Power On Hours: 0 hours 00:07:48.758 Unsafe Shutdowns: 0 00:07:48.758 Unrecoverable Media Errors: 0 00:07:48.758 Lifetime Error Log Entries: 0 00:07:48.758 Warning Temperature Time: 0 minutes 00:07:48.758 Critical Temperature Time: 0 minutes 00:07:48.758 00:07:48.758 Number of Queues 00:07:48.758 ================ 00:07:48.758 Number of I/O Submission Queues: 64 00:07:48.758 Number of I/O Completion Queues: 64 00:07:48.758 00:07:48.758 ZNS Specific Controller Data 00:07:48.758 ============================ 00:07:48.758 Zone Append Size Limit: 0 00:07:48.758 00:07:48.758 00:07:48.758 Active Namespaces 00:07:48.758 ================= 00:07:48.758 Namespace ID:1 00:07:48.758 Error Recovery Timeout: Unlimited 00:07:48.758 Command Set Identifier: NVM (00h) 00:07:48.758 Deallocate: Supported 00:07:48.758 Deallocated/Unwritten Error: Supported 00:07:48.758 Deallocated Read Value: All 0x00 00:07:48.758 Deallocate in Write Zeroes: Not Supported 00:07:48.758 Deallocated Guard Field: 0xFFFF 00:07:48.758 Flush: Supported 00:07:48.758 Reservation: Not Supported 00:07:48.758 Namespace Sharing Capabilities: Private 00:07:48.758 Size (in LBAs): 1048576 (4GiB) 00:07:48.758 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.758 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.758 Thin Provisioning: Not Supported 00:07:48.758 Per-NS Atomic Units: No 00:07:48.758 Maximum Single Source Range Length: 128 00:07:48.758 Maximum Copy Length: 128 00:07:48.758 Maximum Source Range Count: 128 00:07:48.758 NGUID/EUI64 Never Reused: No 00:07:48.758 Namespace Write Protected: No 00:07:48.758 Number of LBA Formats: 8 00:07:48.758 Current LBA Format: LBA Format #04 00:07:48.758 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:07:48.758 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.758 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.758 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.758 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.758 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.758 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.759 00:07:48.759 NVM Specific Namespace Data 00:07:48.759 =========================== 00:07:48.759 Logical Block Storage Tag Mask: 0 00:07:48.759 Protection Information Capabilities: 00:07:48.759 16b Guard Protection Information Storage Tag Support: No 00:07:48.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.759 Storage Tag Check Read Support: No 00:07:48.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Namespace ID:2 00:07:48.759 Error Recovery Timeout: Unlimited 00:07:48.759 Command Set Identifier: NVM (00h) 00:07:48.759 Deallocate: Supported 00:07:48.759 Deallocated/Unwritten Error: Supported 00:07:48.759 Deallocated Read Value: All 0x00 00:07:48.759 Deallocate in Write Zeroes: Not Supported 00:07:48.759 Deallocated Guard Field: 0xFFFF 00:07:48.759 Flush: Supported 00:07:48.759 Reservation: Not Supported 00:07:48.759 Namespace Sharing Capabilities: Private 00:07:48.759 Size (in LBAs): 1048576 (4GiB) 00:07:48.759 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.759 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.759 Thin Provisioning: Not Supported 00:07:48.759 Per-NS Atomic Units: No 00:07:48.759 Maximum Single Source Range Length: 128 00:07:48.759 Maximum Copy Length: 128 00:07:48.759 Maximum Source Range Count: 128 00:07:48.759 NGUID/EUI64 Never Reused: No 00:07:48.759 Namespace Write Protected: No 00:07:48.759 Number of LBA Formats: 8 00:07:48.759 Current LBA Format: LBA Format #04 00:07:48.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.759 00:07:48.759 NVM Specific Namespace Data 00:07:48.759 =========================== 00:07:48.759 Logical Block Storage Tag Mask: 0 00:07:48.759 Protection Information Capabilities: 00:07:48.759 16b Guard Protection Information Storage Tag Support: No 00:07:48.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.759 Storage Tag 
Check Read Support: No 00:07:48.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Namespace ID:3 00:07:48.759 Error Recovery Timeout: Unlimited 00:07:48.759 Command Set Identifier: NVM (00h) 00:07:48.759 Deallocate: Supported 00:07:48.759 Deallocated/Unwritten Error: Supported 00:07:48.759 Deallocated Read Value: All 0x00 00:07:48.759 Deallocate in Write Zeroes: Not Supported 00:07:48.759 Deallocated Guard Field: 0xFFFF 00:07:48.759 Flush: Supported 00:07:48.759 Reservation: Not Supported 00:07:48.759 Namespace Sharing Capabilities: Private 00:07:48.759 Size (in LBAs): 1048576 (4GiB) 00:07:48.759 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.759 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.759 Thin Provisioning: Not Supported 00:07:48.759 Per-NS Atomic Units: No 00:07:48.759 Maximum Single Source Range Length: 128 00:07:48.759 Maximum Copy Length: 128 00:07:48.759 Maximum Source Range Count: 128 00:07:48.759 NGUID/EUI64 Never Reused: No 00:07:48.759 Namespace Write Protected: No 00:07:48.759 Number of LBA Formats: 8 00:07:48.759 Current LBA Format: LBA Format #04 00:07:48.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.759 00:07:48.759 NVM Specific Namespace Data 00:07:48.759 =========================== 00:07:48.759 Logical Block Storage Tag Mask: 0 00:07:48.759 Protection Information Capabilities: 00:07:48.759 16b Guard Protection Information Storage Tag Support: No 00:07:48.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.759 Storage Tag Check Read Support: No 00:07:48.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.759 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:48.759 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.759 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:49.018 ===================================================== 00:07:49.018 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:49.018 ===================================================== 00:07:49.018 Controller Capabilities/Features 00:07:49.018 ================================ 00:07:49.018 Vendor ID: 1b36 00:07:49.018 Subsystem Vendor ID: 1af4 00:07:49.018 Serial Number: 12340 00:07:49.018 Model Number: QEMU NVMe Ctrl 00:07:49.018 Firmware Version: 8.0.0 00:07:49.018 Recommended Arb Burst: 6 00:07:49.018 IEEE OUI Identifier: 00 54 52 00:07:49.018 Multi-path I/O 00:07:49.018 May have multiple subsystem ports: No 00:07:49.018 May have multiple controllers: No 00:07:49.018 Associated with SR-IOV VF: No 00:07:49.018 Max Data Transfer Size: 524288 00:07:49.018 Max Number of Namespaces: 256 00:07:49.018 Max Number of I/O Queues: 64 00:07:49.018 NVMe Specification Version (VS): 1.4 00:07:49.018 NVMe Specification Version (Identify): 1.4 00:07:49.018 Maximum Queue Entries: 2048 00:07:49.018 Contiguous Queues Required: Yes 00:07:49.018 Arbitration Mechanisms Supported 00:07:49.018 Weighted Round Robin: Not Supported 00:07:49.018 Vendor Specific: Not Supported 00:07:49.018 Reset Timeout: 7500 ms 00:07:49.018 Doorbell Stride: 4 bytes 00:07:49.018 NVM Subsystem Reset: Not Supported 00:07:49.018 Command Sets Supported 00:07:49.018 NVM Command Set: Supported 00:07:49.018 Boot Partition: Not Supported 00:07:49.018 Memory Page Size Minimum: 4096 bytes 00:07:49.018 Memory Page Size Maximum: 65536 bytes 00:07:49.018 Persistent Memory Region: Not Supported 00:07:49.018 Optional Asynchronous Events Supported 00:07:49.018 Namespace Attribute Notices: Supported 00:07:49.018 Firmware Activation Notices: Not Supported 00:07:49.018 ANA Change Notices: Not Supported 00:07:49.018 PLE Aggregate Log Change Notices: Not Supported 00:07:49.018 LBA Status Info Alert Notices: Not Supported 00:07:49.018 EGE Aggregate Log Change Notices: Not Supported 00:07:49.018 Normal NVM Subsystem Shutdown event: Not Supported 00:07:49.018 Zone Descriptor Change Notices: Not Supported 00:07:49.018 Discovery Log Change Notices: Not Supported 00:07:49.018 Controller Attributes 00:07:49.018 128-bit Host Identifier: Not Supported 00:07:49.018 Non-Operational Permissive Mode: Not Supported 00:07:49.018 NVM Sets: Not Supported 00:07:49.018 Read Recovery Levels: Not Supported 00:07:49.018 Endurance Groups: Not Supported 00:07:49.018 Predictable Latency Mode: Not Supported 00:07:49.018 Traffic Based Keep Alive: Not Supported 00:07:49.018 Namespace Granularity: Not Supported 00:07:49.018 SQ Associations: Not Supported 00:07:49.018 UUID List: Not Supported 00:07:49.018 Multi-Domain Subsystem: Not Supported 00:07:49.018 Fixed Capacity Management: Not Supported 00:07:49.018 Variable Capacity Management: Not Supported 00:07:49.018 Delete Endurance Group: Not Supported 00:07:49.018 Delete NVM Set: Not Supported 00:07:49.018 Extended LBA Formats Supported: Supported 00:07:49.018 Flexible Data Placement Supported: Not Supported 00:07:49.018 00:07:49.018 Controller Memory Buffer Support 00:07:49.018 ================================ 00:07:49.018 Supported: No 00:07:49.018 00:07:49.018 Persistent Memory Region Support 00:07:49.018
================================ 00:07:49.018 Supported: No 00:07:49.018 00:07:49.018 Admin Command Set Attributes 00:07:49.018 ============================ 00:07:49.018 Security Send/Receive: Not Supported 00:07:49.018 Format NVM: Supported 00:07:49.018 Firmware Activate/Download: Not Supported 00:07:49.018 Namespace Management: Supported 00:07:49.018 Device Self-Test: Not Supported 00:07:49.018 Directives: Supported 00:07:49.018 NVMe-MI: Not Supported 00:07:49.018 Virtualization Management: Not Supported 00:07:49.018 Doorbell Buffer Config: Supported 00:07:49.018 Get LBA Status Capability: Not Supported 00:07:49.018 Command & Feature Lockdown Capability: Not Supported 00:07:49.018 Abort Command Limit: 4 00:07:49.018 Async Event Request Limit: 4 00:07:49.018 Number of Firmware Slots: N/A 00:07:49.018 Firmware Slot 1 Read-Only: N/A 00:07:49.018 Firmware Activation Without Reset: N/A 00:07:49.018 Multiple Update Detection Support: N/A 00:07:49.018 Firmware Update Granularity: No Information Provided 00:07:49.018 Per-Namespace SMART Log: Yes 00:07:49.018 Asymmetric Namespace Access Log Page: Not Supported 00:07:49.018 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:49.018 Command Effects Log Page: Supported 00:07:49.018 Get Log Page Extended Data: Supported 00:07:49.018 Telemetry Log Pages: Not Supported 00:07:49.018 Persistent Event Log Pages: Not Supported 00:07:49.019 Supported Log Pages Log Page: May Support 00:07:49.019 Commands Supported & Effects Log Page: Not Supported 00:07:49.019 Feature Identifiers & Effects Log Page: May Support 00:07:49.019 NVMe-MI Commands & Effects Log Page: May Support 00:07:49.019 Data Area 4 for Telemetry Log: Not Supported 00:07:49.019 Error Log Page Entries Supported: 1 00:07:49.019 Keep Alive: Not Supported 00:07:49.019 00:07:49.019 NVM Command Set Attributes 00:07:49.019 ========================== 00:07:49.019 Submission Queue Entry Size 00:07:49.019 Max: 64 00:07:49.019 Min: 64 00:07:49.019 Completion Queue Entry Size 00:07:49.019 Max: 16 00:07:49.019 Min: 16 00:07:49.019 Number of Namespaces: 256 00:07:49.019 Compare Command: Supported 00:07:49.019 Write Uncorrectable Command: Not Supported 00:07:49.019 Dataset Management Command: Supported 00:07:49.019 Write Zeroes Command: Supported 00:07:49.019 Set Features Save Field: Supported 00:07:49.019 Reservations: Not Supported 00:07:49.019 Timestamp: Supported 00:07:49.019 Copy: Supported 00:07:49.019 Volatile Write Cache: Present 00:07:49.019 Atomic Write Unit (Normal): 1 00:07:49.019 Atomic Write Unit (PFail): 1 00:07:49.019 Atomic Compare & Write Unit: 1 00:07:49.019 Fused Compare & Write: Not Supported 00:07:49.019 Scatter-Gather List 00:07:49.019 SGL Command Set: Supported 00:07:49.019 SGL Keyed: Not Supported 00:07:49.019 SGL Bit Bucket Descriptor: Not Supported 00:07:49.019 SGL Metadata Pointer: Not Supported 00:07:49.019 Oversized SGL: Not Supported 00:07:49.019 SGL Metadata Address: Not Supported 00:07:49.019 SGL Offset: Not Supported 00:07:49.019 Transport SGL Data Block: Not Supported 00:07:49.019 Replay Protected Memory Block: Not Supported 00:07:49.019 00:07:49.019 Firmware Slot Information 00:07:49.019 ========================= 00:07:49.019 Active slot: 1 00:07:49.019 Slot 1 Firmware Revision: 1.0 00:07:49.019 00:07:49.019 00:07:49.019 Commands Supported and Effects 00:07:49.019 ============================== 00:07:49.019 Admin Commands 00:07:49.019 -------------- 00:07:49.019 Delete I/O Submission Queue (00h): Supported 00:07:49.019 Create I/O Submission Queue (01h): Supported 00:07:49.019
Get Log Page (02h): Supported 00:07:49.019 Delete I/O Completion Queue (04h): Supported 00:07:49.019 Create I/O Completion Queue (05h): Supported 00:07:49.019 Identify (06h): Supported 00:07:49.019 Abort (08h): Supported 00:07:49.019 Set Features (09h): Supported 00:07:49.019 Get Features (0Ah): Supported 00:07:49.019 Asynchronous Event Request (0Ch): Supported 00:07:49.019 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:49.019 Directive Send (19h): Supported 00:07:49.019 Directive Receive (1Ah): Supported 00:07:49.019 Virtualization Management (1Ch): Supported 00:07:49.019 Doorbell Buffer Config (7Ch): Supported 00:07:49.019 Format NVM (80h): Supported LBA-Change 00:07:49.019 I/O Commands 00:07:49.019 ------------ 00:07:49.019 Flush (00h): Supported LBA-Change 00:07:49.019 Write (01h): Supported LBA-Change 00:07:49.019 Read (02h): Supported 00:07:49.019 Compare (05h): Supported 00:07:49.019 Write Zeroes (08h): Supported LBA-Change 00:07:49.019 Dataset Management (09h): Supported LBA-Change 00:07:49.019 Unknown (0Ch): Supported 00:07:49.019 Unknown (12h): Supported 00:07:49.019 Copy (19h): Supported LBA-Change 00:07:49.019 Unknown (1Dh): Supported LBA-Change 00:07:49.019 00:07:49.019 Error Log 00:07:49.019 ========= 00:07:49.019 00:07:49.019 Arbitration 00:07:49.019 =========== 00:07:49.019 Arbitration Burst: no limit 00:07:49.019 00:07:49.019 Power Management 00:07:49.019 ================ 00:07:49.019 Number of Power States: 1 00:07:49.019 Current Power State: Power State #0 00:07:49.019 Power State #0: 00:07:49.019 Max Power: 25.00 W 00:07:49.019 Non-Operational State: Operational 00:07:49.019 Entry Latency: 16 microseconds 00:07:49.019 Exit Latency: 4 microseconds 00:07:49.019 Relative Read Throughput: 0 00:07:49.019 Relative Read Latency: 0 00:07:49.019 Relative Write Throughput: 0 00:07:49.019 Relative Write Latency: 0 00:07:49.019 Idle Power: Not Reported 00:07:49.019 Active Power: Not Reported 00:07:49.019 Non-Operational Permissive Mode: Not Supported 00:07:49.019 00:07:49.019 Health Information 00:07:49.019 ================== 00:07:49.019 Critical Warnings: 00:07:49.019 Available Spare Space: OK 00:07:49.019 Temperature: OK 00:07:49.019 Device Reliability: OK 00:07:49.019 Read Only: No 00:07:49.019 Volatile Memory Backup: OK 00:07:49.019 Current Temperature: 323 Kelvin (50 Celsius) 00:07:49.019 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:49.019 Available Spare: 0% 00:07:49.019 Available Spare Threshold: 0% 00:07:49.019 Life Percentage Used: 0% 00:07:49.019 Data Units Read: 659 00:07:49.019 Data Units Written: 587 00:07:49.019 Host Read Commands: 35649 00:07:49.019 Host Write Commands: 35435 00:07:49.019 Controller Busy Time: 0 minutes 00:07:49.019 Power Cycles: 0 00:07:49.019 Power On Hours: 0 hours 00:07:49.019 Unsafe Shutdowns: 0 00:07:49.019 Unrecoverable Media Errors: 0 00:07:49.019 Lifetime Error Log Entries: 0 00:07:49.019 Warning Temperature Time: 0 minutes 00:07:49.019 Critical Temperature Time: 0 minutes 00:07:49.019 00:07:49.019 Number of Queues 00:07:49.019 ================ 00:07:49.019 Number of I/O Submission Queues: 64 00:07:49.019 Number of I/O Completion Queues: 64 00:07:49.019 00:07:49.019 ZNS Specific Controller Data 00:07:49.019 ============================ 00:07:49.019 Zone Append Size Limit: 0 00:07:49.019 00:07:49.019 00:07:49.019 Active Namespaces 00:07:49.019 ================= 00:07:49.019 Namespace ID:1 00:07:49.019 Error Recovery Timeout: Unlimited 00:07:49.019 Command Set Identifier: NVM (00h) 00:07:49.019 Deallocate: Supported 
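The namespace listings in this log give Size, Capacity, and Utilization in LBAs with a GiB figure in parentheses; the figure is simply the LBA count times the current LBA format's data size. A quick sketch of that arithmetic, using the "Size (in LBAs): 262144 (1GiB)" namespace reported earlier, whose current format (#04) has a 4096-byte data size:

  # Cross-check a GiB figure from this log: LBA count x data block size.
  lbas=262144
  block=4096
  echo $(( lbas * block ))             # 1073741824 bytes
  echo $(( lbas * block / 1024**3 ))   # 1 (GiB)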
00:07:49.019 Deallocated/Unwritten Error: Supported 00:07:49.019 Deallocated Read Value: All 0x00 00:07:49.019 Deallocate in Write Zeroes: Not Supported 00:07:49.019 Deallocated Guard Field: 0xFFFF 00:07:49.019 Flush: Supported 00:07:49.019 Reservation: Not Supported 00:07:49.019 Metadata Transferred as: Separate Metadata Buffer 00:07:49.019 Namespace Sharing Capabilities: Private 00:07:49.019 Size (in LBAs): 1548666 (5GiB) 00:07:49.019 Capacity (in LBAs): 1548666 (5GiB) 00:07:49.019 Utilization (in LBAs): 1548666 (5GiB) 00:07:49.019 Thin Provisioning: Not Supported 00:07:49.019 Per-NS Atomic Units: No 00:07:49.019 Maximum Single Source Range Length: 128 00:07:49.019 Maximum Copy Length: 128 00:07:49.019 Maximum Source Range Count: 128 00:07:49.019 NGUID/EUI64 Never Reused: No 00:07:49.019 Namespace Write Protected: No 00:07:49.019 Number of LBA Formats: 8 00:07:49.019 Current LBA Format: LBA Format #07 00:07:49.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:49.019 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:49.019 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:49.019 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:49.019 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:49.019 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:49.019 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:49.019 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:49.019 00:07:49.019 NVM Specific Namespace Data 00:07:49.019 =========================== 00:07:49.019 Logical Block Storage Tag Mask: 0 00:07:49.019 Protection Information Capabilities: 00:07:49.019 16b Guard Protection Information Storage Tag Support: No 00:07:49.019 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:49.019 Storage Tag Check Read Support: No 00:07:49.019 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.019 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.019 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.019 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.019 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.019 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.019 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.019 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.019 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:49.019 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:49.278 ===================================================== 00:07:49.278 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:49.278 ===================================================== 00:07:49.278 Controller Capabilities/Features 00:07:49.278 ================================ 00:07:49.278 Vendor ID: 1b36 00:07:49.278 Subsystem Vendor ID: 1af4 00:07:49.278 Serial Number: 12341 00:07:49.278 Model Number: QEMU NVMe Ctrl 00:07:49.278 Firmware Version: 8.0.0 00:07:49.278 Recommended Arb Burst: 6 00:07:49.278 IEEE OUI Identifier: 00 54 52 00:07:49.278 Multi-path I/O 00:07:49.278 May have multiple subsystem ports: No 00:07:49.278 May have multiple 
controllers: No 00:07:49.278 Associated with SR-IOV VF: No 00:07:49.278 Max Data Transfer Size: 524288 00:07:49.278 Max Number of Namespaces: 256 00:07:49.278 Max Number of I/O Queues: 64 00:07:49.278 NVMe Specification Version (VS): 1.4 00:07:49.278 NVMe Specification Version (Identify): 1.4 00:07:49.278 Maximum Queue Entries: 2048 00:07:49.278 Contiguous Queues Required: Yes 00:07:49.278 Arbitration Mechanisms Supported 00:07:49.278 Weighted Round Robin: Not Supported 00:07:49.278 Vendor Specific: Not Supported 00:07:49.278 Reset Timeout: 7500 ms 00:07:49.278 Doorbell Stride: 4 bytes 00:07:49.278 NVM Subsystem Reset: Not Supported 00:07:49.278 Command Sets Supported 00:07:49.278 NVM Command Set: Supported 00:07:49.278 Boot Partition: Not Supported 00:07:49.278 Memory Page Size Minimum: 4096 bytes 00:07:49.278 Memory Page Size Maximum: 65536 bytes 00:07:49.278 Persistent Memory Region: Not Supported 00:07:49.278 Optional Asynchronous Events Supported 00:07:49.278 Namespace Attribute Notices: Supported 00:07:49.278 Firmware Activation Notices: Not Supported 00:07:49.278 ANA Change Notices: Not Supported 00:07:49.278 PLE Aggregate Log Change Notices: Not Supported 00:07:49.278 LBA Status Info Alert Notices: Not Supported 00:07:49.278 EGE Aggregate Log Change Notices: Not Supported 00:07:49.278 Normal NVM Subsystem Shutdown event: Not Supported 00:07:49.278 Zone Descriptor Change Notices: Not Supported 00:07:49.278 Discovery Log Change Notices: Not Supported 00:07:49.278 Controller Attributes 00:07:49.278 128-bit Host Identifier: Not Supported 00:07:49.278 Non-Operational Permissive Mode: Not Supported 00:07:49.278 NVM Sets: Not Supported 00:07:49.278 Read Recovery Levels: Not Supported 00:07:49.278 Endurance Groups: Not Supported 00:07:49.278 Predictable Latency Mode: Not Supported 00:07:49.278 Traffic Based Keep Alive: Not Supported 00:07:49.278 Namespace Granularity: Not Supported 00:07:49.278 SQ Associations: Not Supported 00:07:49.278 UUID List: Not Supported 00:07:49.279 Multi-Domain Subsystem: Not Supported 00:07:49.279 Fixed Capacity Management: Not Supported 00:07:49.279 Variable Capacity Management: Not Supported 00:07:49.279 Delete Endurance Group: Not Supported 00:07:49.279 Delete NVM Set: Not Supported 00:07:49.279 Extended LBA Formats Supported: Supported 00:07:49.279 Flexible Data Placement Supported: Not Supported 00:07:49.279 00:07:49.279 Controller Memory Buffer Support 00:07:49.279 ================================ 00:07:49.279 Supported: No 00:07:49.279 00:07:49.279 Persistent Memory Region Support 00:07:49.279 ================================ 00:07:49.279 Supported: No 00:07:49.279 00:07:49.279 Admin Command Set Attributes 00:07:49.279 ============================ 00:07:49.279 Security Send/Receive: Not Supported 00:07:49.279 Format NVM: Supported 00:07:49.279 Firmware Activate/Download: Not Supported 00:07:49.279 Namespace Management: Supported 00:07:49.279 Device Self-Test: Not Supported 00:07:49.279 Directives: Supported 00:07:49.279 NVMe-MI: Not Supported 00:07:49.279 Virtualization Management: Not Supported 00:07:49.279 Doorbell Buffer Config: Supported 00:07:49.279 Get LBA Status Capability: Not Supported 00:07:49.279 Command & Feature Lockdown Capability: Not Supported 00:07:49.279 Abort Command Limit: 4 00:07:49.279 Async Event Request Limit: 4 00:07:49.279 Number of Firmware Slots: N/A 00:07:49.279 Firmware Slot 1 Read-Only: N/A 00:07:49.279 Firmware Activation Without Reset: N/A 00:07:49.279 Multiple Update Detection Support: N/A 00:07:49.279 Firmware Update
Granularity: No Information Provided 00:07:49.279 Per-Namespace SMART Log: Yes 00:07:49.279 Asymmetric Namespace Access Log Page: Not Supported 00:07:49.279 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:49.279 Command Effects Log Page: Supported 00:07:49.279 Get Log Page Extended Data: Supported 00:07:49.279 Telemetry Log Pages: Not Supported 00:07:49.279 Persistent Event Log Pages: Not Supported 00:07:49.279 Supported Log Pages Log Page: May Support 00:07:49.279 Commands Supported & Effects Log Page: Not Supported 00:07:49.279 Feature Identifiers & Effects Log Page: May Support 00:07:49.279 NVMe-MI Commands & Effects Log Page: May Support 00:07:49.279 Data Area 4 for Telemetry Log: Not Supported 00:07:49.279 Error Log Page Entries Supported: 1 00:07:49.279 Keep Alive: Not Supported 00:07:49.279 00:07:49.279 NVM Command Set Attributes 00:07:49.279 ========================== 00:07:49.279 Submission Queue Entry Size 00:07:49.279 Max: 64 00:07:49.279 Min: 64 00:07:49.279 Completion Queue Entry Size 00:07:49.279 Max: 16 00:07:49.279 Min: 16 00:07:49.279 Number of Namespaces: 256 00:07:49.279 Compare Command: Supported 00:07:49.279 Write Uncorrectable Command: Not Supported 00:07:49.279 Dataset Management Command: Supported 00:07:49.279 Write Zeroes Command: Supported 00:07:49.279 Set Features Save Field: Supported 00:07:49.279 Reservations: Not Supported 00:07:49.279 Timestamp: Supported 00:07:49.279 Copy: Supported 00:07:49.279 Volatile Write Cache: Present 00:07:49.279 Atomic Write Unit (Normal): 1 00:07:49.279 Atomic Write Unit (PFail): 1 00:07:49.279 Atomic Compare & Write Unit: 1 00:07:49.279 Fused Compare & Write: Not Supported 00:07:49.279 Scatter-Gather List 00:07:49.279 SGL Command Set: Supported 00:07:49.279 SGL Keyed: Not Supported 00:07:49.279 SGL Bit Bucket Descriptor: Not Supported 00:07:49.279 SGL Metadata Pointer: Not Supported 00:07:49.279 Oversized SGL: Not Supported 00:07:49.279 SGL Metadata Address: Not Supported 00:07:49.279 SGL Offset: Not Supported 00:07:49.279 Transport SGL Data Block: Not Supported 00:07:49.279 Replay Protected Memory Block: Not Supported 00:07:49.279 00:07:49.279 Firmware Slot Information 00:07:49.279 ========================= 00:07:49.279 Active slot: 1 00:07:49.279 Slot 1 Firmware Revision: 1.0 00:07:49.279 00:07:49.279 00:07:49.279 Commands Supported and Effects 00:07:49.279 ============================== 00:07:49.279 Admin Commands 00:07:49.279 -------------- 00:07:49.279 Delete I/O Submission Queue (00h): Supported 00:07:49.279 Create I/O Submission Queue (01h): Supported 00:07:49.279 Get Log Page (02h): Supported 00:07:49.279 Delete I/O Completion Queue (04h): Supported 00:07:49.279 Create I/O Completion Queue (05h): Supported 00:07:49.279 Identify (06h): Supported 00:07:49.279 Abort (08h): Supported 00:07:49.279 Set Features (09h): Supported 00:07:49.279 Get Features (0Ah): Supported 00:07:49.279 Asynchronous Event Request (0Ch): Supported 00:07:49.279 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:49.279 Directive Send (19h): Supported 00:07:49.279 Directive Receive (1Ah): Supported 00:07:49.279 Virtualization Management (1Ch): Supported 00:07:49.279 Doorbell Buffer Config (7Ch): Supported 00:07:49.279 Format NVM (80h): Supported LBA-Change 00:07:49.279 I/O Commands 00:07:49.279 ------------ 00:07:49.279 Flush (00h): Supported LBA-Change 00:07:49.279 Write (01h): Supported LBA-Change 00:07:49.279 Read (02h): Supported 00:07:49.279 Compare (05h): Supported 00:07:49.279 Write Zeroes (08h): Supported LBA-Change 00:07:49.279
Dataset Management (09h): Supported LBA-Change 00:07:49.279 Unknown (0Ch): Supported 00:07:49.279 Unknown (12h): Supported 00:07:49.279 Copy (19h): Supported LBA-Change 00:07:49.279 Unknown (1Dh): Supported LBA-Change 00:07:49.279 00:07:49.279 Error Log 00:07:49.279 ========= 00:07:49.279 00:07:49.279 Arbitration 00:07:49.279 =========== 00:07:49.279 Arbitration Burst: no limit 00:07:49.279 00:07:49.279 Power Management 00:07:49.279 ================ 00:07:49.279 Number of Power States: 1 00:07:49.279 Current Power State: Power State #0 00:07:49.279 Power State #0: 00:07:49.279 Max Power: 25.00 W 00:07:49.279 Non-Operational State: Operational 00:07:49.279 Entry Latency: 16 microseconds 00:07:49.279 Exit Latency: 4 microseconds 00:07:49.279 Relative Read Throughput: 0 00:07:49.279 Relative Read Latency: 0 00:07:49.279 Relative Write Throughput: 0 00:07:49.279 Relative Write Latency: 0 00:07:49.279 Idle Power: Not Reported 00:07:49.279 Active Power: Not Reported 00:07:49.279 Non-Operational Permissive Mode: Not Supported 00:07:49.279 00:07:49.279 Health Information 00:07:49.279 ================== 00:07:49.279 Critical Warnings: 00:07:49.279 Available Spare Space: OK 00:07:49.279 Temperature: OK 00:07:49.279 Device Reliability: OK 00:07:49.279 Read Only: No 00:07:49.279 Volatile Memory Backup: OK 00:07:49.279 Current Temperature: 323 Kelvin (50 Celsius) 00:07:49.279 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:49.279 Available Spare: 0% 00:07:49.279 Available Spare Threshold: 0% 00:07:49.279 Life Percentage Used: 0% 00:07:49.279 Data Units Read: 1037 00:07:49.279 Data Units Written: 904 00:07:49.279 Host Read Commands: 54950 00:07:49.279 Host Write Commands: 53751 00:07:49.279 Controller Busy Time: 0 minutes 00:07:49.279 Power Cycles: 0 00:07:49.279 Power On Hours: 0 hours 00:07:49.279 Unsafe Shutdowns: 0 00:07:49.279 Unrecoverable Media Errors: 0 00:07:49.279 Lifetime Error Log Entries: 0 00:07:49.279 Warning Temperature Time: 0 minutes 00:07:49.279 Critical Temperature Time: 0 minutes 00:07:49.279 00:07:49.279 Number of Queues 00:07:49.279 ================ 00:07:49.279 Number of I/O Submission Queues: 64 00:07:49.279 Number of I/O Completion Queues: 64 00:07:49.279 00:07:49.279 ZNS Specific Controller Data 00:07:49.279 ============================ 00:07:49.279 Zone Append Size Limit: 0 00:07:49.279 00:07:49.279 00:07:49.279 Active Namespaces 00:07:49.279 ================= 00:07:49.279 Namespace ID:1 00:07:49.279 Error Recovery Timeout: Unlimited 00:07:49.279 Command Set Identifier: NVM (00h) 00:07:49.279 Deallocate: Supported 00:07:49.279 Deallocated/Unwritten Error: Supported 00:07:49.279 Deallocated Read Value: All 0x00 00:07:49.279 Deallocate in Write Zeroes: Not Supported 00:07:49.279 Deallocated Guard Field: 0xFFFF 00:07:49.279 Flush: Supported 00:07:49.279 Reservation: Not Supported 00:07:49.279 Namespace Sharing Capabilities: Private 00:07:49.279 Size (in LBAs): 1310720 (5GiB) 00:07:49.279 Capacity (in LBAs): 1310720 (5GiB) 00:07:49.279 Utilization (in LBAs): 1310720 (5GiB) 00:07:49.279 Thin Provisioning: Not Supported 00:07:49.279 Per-NS Atomic Units: No 00:07:49.279 Maximum Single Source Range Length: 128 00:07:49.279 Maximum Copy Length: 128 00:07:49.279 Maximum Source Range Count: 128 00:07:49.279 NGUID/EUI64 Never Reused: No 00:07:49.279 Namespace Write Protected: No 00:07:49.279 Number of LBA Formats: 8 00:07:49.279 Current LBA Format: LBA Format #04 00:07:49.279 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:49.279 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:49.279 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:49.279 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:49.279 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:49.279 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:49.279 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:49.279 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:49.280 00:07:49.280 NVM Specific Namespace Data 00:07:49.280 =========================== 00:07:49.280 Logical Block Storage Tag Mask: 0 00:07:49.280 Protection Information Capabilities: 00:07:49.280 16b Guard Protection Information Storage Tag Support: No 00:07:49.280 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:49.280 Storage Tag Check Read Support: No 00:07:49.280 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.280 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.280 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.280 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.280 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.280 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.280 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.280 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.280 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:49.280 14:00:50 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:49.544 ===================================================== 00:07:49.544 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:49.544 ===================================================== 00:07:49.544 Controller Capabilities/Features 00:07:49.544 ================================ 00:07:49.544 Vendor ID: 1b36 00:07:49.544 Subsystem Vendor ID: 1af4 00:07:49.544 Serial Number: 12342 00:07:49.544 Model Number: QEMU NVMe Ctrl 00:07:49.544 Firmware Version: 8.0.0 00:07:49.544 Recommended Arb Burst: 6 00:07:49.544 IEEE OUI Identifier: 00 54 52 00:07:49.544 Multi-path I/O 00:07:49.544 May have multiple subsystem ports: No 00:07:49.544 May have multiple controllers: No 00:07:49.544 Associated with SR-IOV VF: No 00:07:49.544 Max Data Transfer Size: 524288 00:07:49.544 Max Number of Namespaces: 256 00:07:49.544 Max Number of I/O Queues: 64 00:07:49.544 NVMe Specification Version (VS): 1.4 00:07:49.544 NVMe Specification Version (Identify): 1.4 00:07:49.544 Maximum Queue Entries: 2048 00:07:49.544 Contiguous Queues Required: Yes 00:07:49.544 Arbitration Mechanisms Supported 00:07:49.544 Weighted Round Robin: Not Supported 00:07:49.544 Vendor Specific: Not Supported 00:07:49.544 Reset Timeout: 7500 ms 00:07:49.544 Doorbell Stride: 4 bytes 00:07:49.544 NVM Subsystem Reset: Not Supported 00:07:49.544 Command Sets Supported 00:07:49.544 NVM Command Set: Supported 00:07:49.544 Boot Partition: Not Supported 00:07:49.544 Memory Page Size Minimum: 4096 bytes 00:07:49.544 Memory Page Size Maximum: 65536 bytes 00:07:49.544 Persistent Memory Region: Not Supported 00:07:49.544 Optional Asynchronous Events Supported 00:07:49.544 Namespace Attribute Notices: Supported 00:07:49.544 Firmware 
Activation Notices: Not Supported 00:07:49.544 ANA Change Notices: Not Supported 00:07:49.544 PLE Aggregate Log Change Notices: Not Supported 00:07:49.544 LBA Status Info Alert Notices: Not Supported 00:07:49.544 EGE Aggregate Log Change Notices: Not Supported 00:07:49.544 Normal NVM Subsystem Shutdown event: Not Supported 00:07:49.544 Zone Descriptor Change Notices: Not Supported 00:07:49.544 Discovery Log Change Notices: Not Supported 00:07:49.544 Controller Attributes 00:07:49.544 128-bit Host Identifier: Not Supported 00:07:49.544 Non-Operational Permissive Mode: Not Supported 00:07:49.544 NVM Sets: Not Supported 00:07:49.544 Read Recovery Levels: Not Supported 00:07:49.544 Endurance Groups: Not Supported 00:07:49.544 Predictable Latency Mode: Not Supported 00:07:49.544 Traffic Based Keep Alive: Not Supported 00:07:49.544 Namespace Granularity: Not Supported 00:07:49.544 SQ Associations: Not Supported 00:07:49.544 UUID List: Not Supported 00:07:49.544 Multi-Domain Subsystem: Not Supported 00:07:49.544 Fixed Capacity Management: Not Supported 00:07:49.544 Variable Capacity Management: Not Supported 00:07:49.544 Delete Endurance Group: Not Supported 00:07:49.544 Delete NVM Set: Not Supported 00:07:49.544 Extended LBA Formats Supported: Supported 00:07:49.544 Flexible Data Placement Supported: Not Supported 00:07:49.544 00:07:49.544 Controller Memory Buffer Support 00:07:49.544 ================================ 00:07:49.544 Supported: No 00:07:49.544 00:07:49.544 Persistent Memory Region Support 00:07:49.544 ================================ 00:07:49.544 Supported: No 00:07:49.544 00:07:49.544 Admin Command Set Attributes 00:07:49.544 ============================ 00:07:49.544 Security Send/Receive: Not Supported 00:07:49.544 Format NVM: Supported 00:07:49.544 Firmware Activate/Download: Not Supported 00:07:49.544 Namespace Management: Supported 00:07:49.544 Device Self-Test: Not Supported 00:07:49.544 Directives: Supported 00:07:49.544 NVMe-MI: Not Supported 00:07:49.544 Virtualization Management: Not Supported 00:07:49.544 Doorbell Buffer Config: Supported 00:07:49.544 Get LBA Status Capability: Not Supported 00:07:49.544 Command & Feature Lockdown Capability: Not Supported 00:07:49.544 Abort Command Limit: 4 00:07:49.544 Async Event Request Limit: 4 00:07:49.544 Number of Firmware Slots: N/A 00:07:49.544 Firmware Slot 1 Read-Only: N/A 00:07:49.544 Firmware Activation Without Reset: N/A 00:07:49.544 Multiple Update Detection Support: N/A 00:07:49.544 Firmware Update Granularity: No Information Provided 00:07:49.544 Per-Namespace SMART Log: Yes 00:07:49.544 Asymmetric Namespace Access Log Page: Not Supported 00:07:49.544 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:49.544 Command Effects Log Page: Supported 00:07:49.544 Get Log Page Extended Data: Supported 00:07:49.544 Telemetry Log Pages: Not Supported 00:07:49.544 Persistent Event Log Pages: Not Supported 00:07:49.544 Supported Log Pages Log Page: May Support 00:07:49.544 Commands Supported & Effects Log Page: Not Supported 00:07:49.544 Feature Identifiers & Effects Log Page: May Support 00:07:49.544 NVMe-MI Commands & Effects Log Page: May Support 00:07:49.544 Data Area 4 for Telemetry Log: Not Supported 00:07:49.544 Error Log Page Entries Supported: 1 00:07:49.544 Keep Alive: Not Supported 00:07:49.544 00:07:49.544 NVM Command Set Attributes 00:07:49.544 ========================== 00:07:49.544 Submission Queue Entry Size 00:07:49.544 Max: 64 00:07:49.544 Min: 64 00:07:49.544 Completion Queue Entry Size 00:07:49.544 Max: 16
00:07:49.544 Min: 16 00:07:49.544 Number of Namespaces: 256 00:07:49.544 Compare Command: Supported 00:07:49.544 Write Uncorrectable Command: Not Supported 00:07:49.544 Dataset Management Command: Supported 00:07:49.544 Write Zeroes Command: Supported 00:07:49.544 Set Features Save Field: Supported 00:07:49.544 Reservations: Not Supported 00:07:49.544 Timestamp: Supported 00:07:49.544 Copy: Supported 00:07:49.544 Volatile Write Cache: Present 00:07:49.544 Atomic Write Unit (Normal): 1 00:07:49.544 Atomic Write Unit (PFail): 1 00:07:49.544 Atomic Compare & Write Unit: 1 00:07:49.544 Fused Compare & Write: Not Supported 00:07:49.544 Scatter-Gather List 00:07:49.544 SGL Command Set: Supported 00:07:49.544 SGL Keyed: Not Supported 00:07:49.544 SGL Bit Bucket Descriptor: Not Supported 00:07:49.544 SGL Metadata Pointer: Not Supported 00:07:49.544 Oversized SGL: Not Supported 00:07:49.544 SGL Metadata Address: Not Supported 00:07:49.544 SGL Offset: Not Supported 00:07:49.544 Transport SGL Data Block: Not Supported 00:07:49.544 Replay Protected Memory Block: Not Supported 00:07:49.544 00:07:49.544 Firmware Slot Information 00:07:49.544 ========================= 00:07:49.544 Active slot: 1 00:07:49.544 Slot 1 Firmware Revision: 1.0 00:07:49.544 00:07:49.544 00:07:49.544 Commands Supported and Effects 00:07:49.544 ============================== 00:07:49.544 Admin Commands 00:07:49.544 -------------- 00:07:49.544 Delete I/O Submission Queue (00h): Supported 00:07:49.544 Create I/O Submission Queue (01h): Supported 00:07:49.544 Get Log Page (02h): Supported 00:07:49.545 Delete I/O Completion Queue (04h): Supported 00:07:49.545 Create I/O Completion Queue (05h): Supported 00:07:49.545 Identify (06h): Supported 00:07:49.545 Abort (08h): Supported 00:07:49.545 Set Features (09h): Supported 00:07:49.545 Get Features (0Ah): Supported 00:07:49.545 Asynchronous Event Request (0Ch): Supported 00:07:49.545 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:49.545 Directive Send (19h): Supported 00:07:49.545 Directive Receive (1Ah): Supported 00:07:49.545 Virtualization Management (1Ch): Supported 00:07:49.545 Doorbell Buffer Config (7Ch): Supported 00:07:49.545 Format NVM (80h): Supported LBA-Change 00:07:49.545 I/O Commands 00:07:49.545 ------------ 00:07:49.545 Flush (00h): Supported LBA-Change 00:07:49.545 Write (01h): Supported LBA-Change 00:07:49.545 Read (02h): Supported 00:07:49.545 Compare (05h): Supported 00:07:49.545 Write Zeroes (08h): Supported LBA-Change 00:07:49.545 Dataset Management (09h): Supported LBA-Change 00:07:49.545 Unknown (0Ch): Supported 00:07:49.545 Unknown (12h): Supported 00:07:49.545 Copy (19h): Supported LBA-Change 00:07:49.545 Unknown (1Dh): Supported LBA-Change 00:07:49.545 00:07:49.545 Error Log 00:07:49.545 ========= 00:07:49.545 00:07:49.545 Arbitration 00:07:49.545 =========== 00:07:49.545 Arbitration Burst: no limit 00:07:49.545 00:07:49.545 Power Management 00:07:49.545 ================ 00:07:49.545 Number of Power States: 1 00:07:49.545 Current Power State: Power State #0 00:07:49.545 Power State #0: 00:07:49.545 Max Power: 25.00 W 00:07:49.545 Non-Operational State: Operational 00:07:49.545 Entry Latency: 16 microseconds 00:07:49.545 Exit Latency: 4 microseconds 00:07:49.545 Relative Read Throughput: 0 00:07:49.545 Relative Read Latency: 0 00:07:49.545 Relative Write Throughput: 0 00:07:49.545 Relative Write Latency: 0 00:07:49.545 Idle Power: Not Reported 00:07:49.545 Active Power: Not Reported 00:07:49.545 Non-Operational Permissive Mode: Not Supported 
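The nvme.sh trace lines interleaved in this log show how the test drives the identify tool: one spdk_nvme_identify run per PCIe BDF. A standalone sketch of that loop, with the BDF list assumed from the four controllers dumped in this log; the -r transport string and the -i argument are copied verbatim from the traced invocations:

  # Re-run the identify pass by hand against the QEMU NVMe controllers.
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r "trtype:PCIe traddr:${bdf}" -i 0
  done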
00:07:49.545 00:07:49.545 Health Information 00:07:49.545 ================== 00:07:49.545 Critical Warnings: 00:07:49.545 Available Spare Space: OK 00:07:49.545 Temperature: OK 00:07:49.545 Device Reliability: OK 00:07:49.545 Read Only: No 00:07:49.545 Volatile Memory Backup: OK 00:07:49.545 Current Temperature: 323 Kelvin (50 Celsius) 00:07:49.545 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:49.545 Available Spare: 0% 00:07:49.545 Available Spare Threshold: 0% 00:07:49.545 Life Percentage Used: 0% 00:07:49.545 Data Units Read: 2107 00:07:49.545 Data Units Written: 1894 00:07:49.545 Host Read Commands: 108683 00:07:49.545 Host Write Commands: 106954 00:07:49.545 Controller Busy Time: 0 minutes 00:07:49.545 Power Cycles: 0 00:07:49.545 Power On Hours: 0 hours 00:07:49.545 Unsafe Shutdowns: 0 00:07:49.545 Unrecoverable Media Errors: 0 00:07:49.545 Lifetime Error Log Entries: 0 00:07:49.545 Warning Temperature Time: 0 minutes 00:07:49.545 Critical Temperature Time: 0 minutes 00:07:49.545 00:07:49.545 Number of Queues 00:07:49.545 ================ 00:07:49.545 Number of I/O Submission Queues: 64 00:07:49.545 Number of I/O Completion Queues: 64 00:07:49.545 00:07:49.545 ZNS Specific Controller Data 00:07:49.545 ============================ 00:07:49.545 Zone Append Size Limit: 0 00:07:49.545 00:07:49.545 00:07:49.545 Active Namespaces 00:07:49.545 ================= 00:07:49.545 Namespace ID:1 00:07:49.545 Error Recovery Timeout: Unlimited 00:07:49.545 Command Set Identifier: NVM (00h) 00:07:49.545 Deallocate: Supported 00:07:49.545 Deallocated/Unwritten Error: Supported 00:07:49.545 Deallocated Read Value: All 0x00 00:07:49.545 Deallocate in Write Zeroes: Not Supported 00:07:49.545 Deallocated Guard Field: 0xFFFF 00:07:49.545 Flush: Supported 00:07:49.545 Reservation: Not Supported 00:07:49.545 Namespace Sharing Capabilities: Private 00:07:49.545 Size (in LBAs): 1048576 (4GiB) 00:07:49.545 Capacity (in LBAs): 1048576 (4GiB) 00:07:49.545 Utilization (in LBAs): 1048576 (4GiB) 00:07:49.545 Thin Provisioning: Not Supported 00:07:49.545 Per-NS Atomic Units: No 00:07:49.545 Maximum Single Source Range Length: 128 00:07:49.545 Maximum Copy Length: 128 00:07:49.545 Maximum Source Range Count: 128 00:07:49.545 NGUID/EUI64 Never Reused: No 00:07:49.545 Namespace Write Protected: No 00:07:49.545 Number of LBA Formats: 8 00:07:49.545 Current LBA Format: LBA Format #04 00:07:49.545 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:49.545 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:49.545 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:49.545 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:49.545 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:49.545 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:49.545 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:49.545 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:49.545 00:07:49.545 NVM Specific Namespace Data 00:07:49.545 =========================== 00:07:49.545 Logical Block Storage Tag Mask: 0 00:07:49.545 Protection Information Capabilities: 00:07:49.545 16b Guard Protection Information Storage Tag Support: No 00:07:49.545 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:49.545 Storage Tag Check Read Support: No 00:07:49.545 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Namespace ID:2 00:07:49.545 Error Recovery Timeout: Unlimited 00:07:49.545 Command Set Identifier: NVM (00h) 00:07:49.545 Deallocate: Supported 00:07:49.545 Deallocated/Unwritten Error: Supported 00:07:49.545 Deallocated Read Value: All 0x00 00:07:49.545 Deallocate in Write Zeroes: Not Supported 00:07:49.545 Deallocated Guard Field: 0xFFFF 00:07:49.545 Flush: Supported 00:07:49.545 Reservation: Not Supported 00:07:49.545 Namespace Sharing Capabilities: Private 00:07:49.545 Size (in LBAs): 1048576 (4GiB) 00:07:49.545 Capacity (in LBAs): 1048576 (4GiB) 00:07:49.545 Utilization (in LBAs): 1048576 (4GiB) 00:07:49.545 Thin Provisioning: Not Supported 00:07:49.545 Per-NS Atomic Units: No 00:07:49.545 Maximum Single Source Range Length: 128 00:07:49.545 Maximum Copy Length: 128 00:07:49.545 Maximum Source Range Count: 128 00:07:49.545 NGUID/EUI64 Never Reused: No 00:07:49.545 Namespace Write Protected: No 00:07:49.545 Number of LBA Formats: 8 00:07:49.545 Current LBA Format: LBA Format #04 00:07:49.545 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:49.545 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:49.545 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:49.545 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:49.545 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:49.545 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:49.545 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:49.545 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:49.545 00:07:49.545 NVM Specific Namespace Data 00:07:49.545 =========================== 00:07:49.545 Logical Block Storage Tag Mask: 0 00:07:49.545 Protection Information Capabilities: 00:07:49.545 16b Guard Protection Information Storage Tag Support: No 00:07:49.545 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:49.545 Storage Tag Check Read Support: No 00:07:49.545 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.545 Namespace ID:3 00:07:49.545 Error Recovery Timeout: Unlimited 00:07:49.545 Command Set Identifier: NVM (00h) 00:07:49.545 Deallocate: Supported 00:07:49.545 Deallocated/Unwritten Error: Supported 00:07:49.545 Deallocated Read 
Value: All 0x00 00:07:49.545 Deallocate in Write Zeroes: Not Supported 00:07:49.545 Deallocated Guard Field: 0xFFFF 00:07:49.545 Flush: Supported 00:07:49.545 Reservation: Not Supported 00:07:49.545 Namespace Sharing Capabilities: Private 00:07:49.545 Size (in LBAs): 1048576 (4GiB) 00:07:49.545 Capacity (in LBAs): 1048576 (4GiB) 00:07:49.545 Utilization (in LBAs): 1048576 (4GiB) 00:07:49.545 Thin Provisioning: Not Supported 00:07:49.545 Per-NS Atomic Units: No 00:07:49.545 Maximum Single Source Range Length: 128 00:07:49.546 Maximum Copy Length: 128 00:07:49.546 Maximum Source Range Count: 128 00:07:49.546 NGUID/EUI64 Never Reused: No 00:07:49.546 Namespace Write Protected: No 00:07:49.546 Number of LBA Formats: 8 00:07:49.546 Current LBA Format: LBA Format #04 00:07:49.546 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:49.546 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:49.546 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:49.546 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:49.546 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:49.546 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:49.546 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:49.546 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:49.546 00:07:49.546 NVM Specific Namespace Data 00:07:49.546 =========================== 00:07:49.546 Logical Block Storage Tag Mask: 0 00:07:49.546 Protection Information Capabilities: 00:07:49.546 16b Guard Protection Information Storage Tag Support: No 00:07:49.546 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:49.546 Storage Tag Check Read Support: No 00:07:49.546 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.546 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.546 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.546 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.546 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.546 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.546 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.546 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.546 14:00:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:49.546 14:00:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:49.804 ===================================================== 00:07:49.804 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:49.804 ===================================================== 00:07:49.804 Controller Capabilities/Features 00:07:49.804 ================================ 00:07:49.804 Vendor ID: 1b36 00:07:49.804 Subsystem Vendor ID: 1af4 00:07:49.804 Serial Number: 12343 00:07:49.804 Model Number: QEMU NVMe Ctrl 00:07:49.804 Firmware Version: 8.0.0 00:07:49.804 Recommended Arb Burst: 6 00:07:49.804 IEEE OUI Identifier: 00 54 52 00:07:49.804 Multi-path I/O 00:07:49.804 May have multiple subsystem ports: No 00:07:49.804 May have multiple controllers: Yes 00:07:49.804 Associated with SR-IOV VF: No 00:07:49.804 Max Data Transfer Size: 524288 00:07:49.804 Max Number of Namespaces: 
256 00:07:49.804 Max Number of I/O Queues: 64 00:07:49.804 NVMe Specification Version (VS): 1.4 00:07:49.804 NVMe Specification Version (Identify): 1.4 00:07:49.804 Maximum Queue Entries: 2048 00:07:49.804 Contiguous Queues Required: Yes 00:07:49.804 Arbitration Mechanisms Supported 00:07:49.804 Weighted Round Robin: Not Supported 00:07:49.804 Vendor Specific: Not Supported 00:07:49.804 Reset Timeout: 7500 ms 00:07:49.804 Doorbell Stride: 4 bytes 00:07:49.804 NVM Subsystem Reset: Not Supported 00:07:49.804 Command Sets Supported 00:07:49.804 NVM Command Set: Supported 00:07:49.804 Boot Partition: Not Supported 00:07:49.804 Memory Page Size Minimum: 4096 bytes 00:07:49.804 Memory Page Size Maximum: 65536 bytes 00:07:49.804 Persistent Memory Region: Not Supported 00:07:49.804 Optional Asynchronous Events Supported 00:07:49.804 Namespace Attribute Notices: Supported 00:07:49.804 Firmware Activation Notices: Not Supported 00:07:49.804 ANA Change Notices: Not Supported 00:07:49.804 PLE Aggregate Log Change Notices: Not Supported 00:07:49.804 LBA Status Info Alert Notices: Not Supported 00:07:49.804 EGE Aggregate Log Change Notices: Not Supported 00:07:49.804 Normal NVM Subsystem Shutdown event: Not Supported 00:07:49.804 Zone Descriptor Change Notices: Not Supported 00:07:49.804 Discovery Log Change Notices: Not Supported 00:07:49.804 Controller Attributes 00:07:49.804 128-bit Host Identifier: Not Supported 00:07:49.804 Non-Operational Permissive Mode: Not Supported 00:07:49.804 NVM Sets: Not Supported 00:07:49.804 Read Recovery Levels: Not Supported 00:07:49.804 Endurance Groups: Supported 00:07:49.804 Predictable Latency Mode: Not Supported 00:07:49.804 Traffic Based Keep Alive: Not Supported 00:07:49.804 Namespace Granularity: Not Supported 00:07:49.804 SQ Associations: Not Supported 00:07:49.804 UUID List: Not Supported 00:07:49.804 Multi-Domain Subsystem: Not Supported 00:07:49.804 Fixed Capacity Management: Not Supported 00:07:49.804 Variable Capacity Management: Not Supported 00:07:49.804 Delete Endurance Group: Not Supported 00:07:49.804 Delete NVM Set: Not Supported 00:07:49.804 Extended LBA Formats Supported: Supported 00:07:49.804 Flexible Data Placement Supported: Supported 00:07:49.804 00:07:49.804 Controller Memory Buffer Support 00:07:49.804 ================================ 00:07:49.804 Supported: No 00:07:49.804 00:07:49.804 Persistent Memory Region Support 00:07:49.804 ================================ 00:07:49.804 Supported: No 00:07:49.804 00:07:49.804 Admin Command Set Attributes 00:07:49.804 ============================ 00:07:49.804 Security Send/Receive: Not Supported 00:07:49.804 Format NVM: Supported 00:07:49.804 Firmware Activate/Download: Not Supported 00:07:49.804 Namespace Management: Supported 00:07:49.804 Device Self-Test: Not Supported 00:07:49.804 Directives: Supported 00:07:49.804 NVMe-MI: Not Supported 00:07:49.804 Virtualization Management: Not Supported 00:07:49.804 Doorbell Buffer Config: Supported 00:07:49.804 Get LBA Status Capability: Not Supported 00:07:49.804 Command & Feature Lockdown Capability: Not Supported 00:07:49.804 Abort Command Limit: 4 00:07:49.804 Async Event Request Limit: 4 00:07:49.804 Number of Firmware Slots: N/A 00:07:49.804 Firmware Slot 1 Read-Only: N/A 00:07:49.804 Firmware Activation Without Reset: N/A 00:07:49.804 Multiple Update Detection Support: N/A 00:07:49.804 Firmware Update Granularity: No Information Provided 00:07:49.804 Per-Namespace SMART Log: Yes 00:07:49.804 Asymmetric Namespace Access Log Page: Not Supported 
00:07:49.804 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:49.804 Command Effects Log Page: Supported 00:07:49.804 Get Log Page Extended Data: Supported 00:07:49.804 Telemetry Log Pages: Not Supported 00:07:49.804 Persistent Event Log Pages: Not Supported 00:07:49.804 Supported Log Pages Log Page: May Support 00:07:49.804 Commands Supported & Effects Log Page: Not Supported 00:07:49.804 Feature Identifiers & Effects Log Page: May Support 00:07:49.804 NVMe-MI Commands & Effects Log Page: May Support 00:07:49.804 Data Area 4 for Telemetry Log: Not Supported 00:07:49.805 Error Log Page Entries Supported: 1 00:07:49.805 Keep Alive: Not Supported 00:07:49.805 00:07:49.805 NVM Command Set Attributes 00:07:49.805 ========================== 00:07:49.805 Submission Queue Entry Size 00:07:49.805 Max: 64 00:07:49.805 Min: 64 00:07:49.805 Completion Queue Entry Size 00:07:49.805 Max: 16 00:07:49.805 Min: 16 00:07:49.805 Number of Namespaces: 256 00:07:49.805 Compare Command: Supported 00:07:49.805 Write Uncorrectable Command: Not Supported 00:07:49.805 Dataset Management Command: Supported 00:07:49.805 Write Zeroes Command: Supported 00:07:49.805 Set Features Save Field: Supported 00:07:49.805 Reservations: Not Supported 00:07:49.805 Timestamp: Supported 00:07:49.805 Copy: Supported 00:07:49.805 Volatile Write Cache: Present 00:07:49.805 Atomic Write Unit (Normal): 1 00:07:49.805 Atomic Write Unit (PFail): 1 00:07:49.805 Atomic Compare & Write Unit: 1 00:07:49.805 Fused Compare & Write: Not Supported 00:07:49.805 Scatter-Gather List 00:07:49.805 SGL Command Set: Supported 00:07:49.805 SGL Keyed: Not Supported 00:07:49.805 SGL Bit Bucket Descriptor: Not Supported 00:07:49.805 SGL Metadata Pointer: Not Supported 00:07:49.805 Oversized SGL: Not Supported 00:07:49.805 SGL Metadata Address: Not Supported 00:07:49.805 SGL Offset: Not Supported 00:07:49.805 Transport SGL Data Block: Not Supported 00:07:49.805 Replay Protected Memory Block: Not Supported 00:07:49.805 00:07:49.805 Firmware Slot Information 00:07:49.805 ========================= 00:07:49.805 Active slot: 1 00:07:49.805 Slot 1 Firmware Revision: 1.0 00:07:49.805 00:07:49.805 00:07:49.805 Commands Supported and Effects 00:07:49.805 ============================== 00:07:49.805 Admin Commands 00:07:49.805 -------------- 00:07:49.805 Delete I/O Submission Queue (00h): Supported 00:07:49.805 Create I/O Submission Queue (01h): Supported 00:07:49.805 Get Log Page (02h): Supported 00:07:49.805 Delete I/O Completion Queue (04h): Supported 00:07:49.805 Create I/O Completion Queue (05h): Supported 00:07:49.805 Identify (06h): Supported 00:07:49.805 Abort (08h): Supported 00:07:49.805 Set Features (09h): Supported 00:07:49.805 Get Features (0Ah): Supported 00:07:49.805 Asynchronous Event Request (0Ch): Supported 00:07:49.805 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:49.805 Directive Send (19h): Supported 00:07:49.805 Directive Receive (1Ah): Supported 00:07:49.805 Virtualization Management (1Ch): Supported 00:07:49.805 Doorbell Buffer Config (7Ch): Supported 00:07:49.805 Format NVM (80h): Supported LBA-Change 00:07:49.805 I/O Commands 00:07:49.805 ------------ 00:07:49.805 Flush (00h): Supported LBA-Change 00:07:49.805 Write (01h): Supported LBA-Change 00:07:49.805 Read (02h): Supported 00:07:49.805 Compare (05h): Supported 00:07:49.805 Write Zeroes (08h): Supported LBA-Change 00:07:49.805 Dataset Management (09h): Supported LBA-Change 00:07:49.805 Unknown (0Ch): Supported 00:07:49.805 Unknown (12h): Supported 00:07:49.805 Copy 
(19h): Supported LBA-Change 00:07:49.805 Unknown (1Dh): Supported LBA-Change 00:07:49.805 00:07:49.805 Error Log 00:07:49.805 ========= 00:07:49.805 00:07:49.805 Arbitration 00:07:49.805 =========== 00:07:49.805 Arbitration Burst: no limit 00:07:49.805 00:07:49.805 Power Management 00:07:49.805 ================ 00:07:49.805 Number of Power States: 1 00:07:49.805 Current Power State: Power State #0 00:07:49.805 Power State #0: 00:07:49.805 Max Power: 25.00 W 00:07:49.805 Non-Operational State: Operational 00:07:49.805 Entry Latency: 16 microseconds 00:07:49.805 Exit Latency: 4 microseconds 00:07:49.805 Relative Read Throughput: 0 00:07:49.805 Relative Read Latency: 0 00:07:49.805 Relative Write Throughput: 0 00:07:49.805 Relative Write Latency: 0 00:07:49.805 Idle Power: Not Reported 00:07:49.805 Active Power: Not Reported 00:07:49.805 Non-Operational Permissive Mode: Not Supported 00:07:49.805 00:07:49.805 Health Information 00:07:49.805 ================== 00:07:49.805 Critical Warnings: 00:07:49.805 Available Spare Space: OK 00:07:49.805 Temperature: OK 00:07:49.805 Device Reliability: OK 00:07:49.805 Read Only: No 00:07:49.805 Volatile Memory Backup: OK 00:07:49.805 Current Temperature: 323 Kelvin (50 Celsius) 00:07:49.805 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:49.805 Available Spare: 0% 00:07:49.805 Available Spare Threshold: 0% 00:07:49.805 Life Percentage Used: 0% 00:07:49.805 Data Units Read: 816 00:07:49.805 Data Units Written: 745 00:07:49.805 Host Read Commands: 37235 00:07:49.805 Host Write Commands: 36658 00:07:49.805 Controller Busy Time: 0 minutes 00:07:49.805 Power Cycles: 0 00:07:49.805 Power On Hours: 0 hours 00:07:49.805 Unsafe Shutdowns: 0 00:07:49.805 Unrecoverable Media Errors: 0 00:07:49.805 Lifetime Error Log Entries: 0 00:07:49.805 Warning Temperature Time: 0 minutes 00:07:49.805 Critical Temperature Time: 0 minutes 00:07:49.805 00:07:49.805 Number of Queues 00:07:49.805 ================ 00:07:49.805 Number of I/O Submission Queues: 64 00:07:49.805 Number of I/O Completion Queues: 64 00:07:49.805 00:07:49.805 ZNS Specific Controller Data 00:07:49.805 ============================ 00:07:49.805 Zone Append Size Limit: 0 00:07:49.805 00:07:49.805 00:07:49.805 Active Namespaces 00:07:49.805 ================= 00:07:49.805 Namespace ID:1 00:07:49.805 Error Recovery Timeout: Unlimited 00:07:49.805 Command Set Identifier: NVM (00h) 00:07:49.805 Deallocate: Supported 00:07:49.805 Deallocated/Unwritten Error: Supported 00:07:49.805 Deallocated Read Value: All 0x00 00:07:49.805 Deallocate in Write Zeroes: Not Supported 00:07:49.805 Deallocated Guard Field: 0xFFFF 00:07:49.805 Flush: Supported 00:07:49.805 Reservation: Not Supported 00:07:49.805 Namespace Sharing Capabilities: Multiple Controllers 00:07:49.805 Size (in LBAs): 262144 (1GiB) 00:07:49.805 Capacity (in LBAs): 262144 (1GiB) 00:07:49.805 Utilization (in LBAs): 262144 (1GiB) 00:07:49.805 Thin Provisioning: Not Supported 00:07:49.805 Per-NS Atomic Units: No 00:07:49.805 Maximum Single Source Range Length: 128 00:07:49.805 Maximum Copy Length: 128 00:07:49.805 Maximum Source Range Count: 128 00:07:49.805 NGUID/EUI64 Never Reused: No 00:07:49.805 Namespace Write Protected: No 00:07:49.805 Endurance group ID: 1 00:07:49.805 Number of LBA Formats: 8 00:07:49.805 Current LBA Format: LBA Format #04 00:07:49.805 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:49.805 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:49.805 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:49.805 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:49.805 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:49.805 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:49.805 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:49.805 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:49.805 00:07:49.805 Get Feature FDP: 00:07:49.805 ================ 00:07:49.805 Enabled: Yes 00:07:49.805 FDP configuration index: 0 00:07:49.805 00:07:49.805 FDP configurations log page 00:07:49.805 =========================== 00:07:49.805 Number of FDP configurations: 1 00:07:49.805 Version: 0 00:07:49.805 Size: 112 00:07:49.805 FDP Configuration Descriptor: 0 00:07:49.805 Descriptor Size: 96 00:07:49.805 Reclaim Group Identifier format: 2 00:07:49.805 FDP Volatile Write Cache: Not Present 00:07:49.805 FDP Configuration: Valid 00:07:49.805 Vendor Specific Size: 0 00:07:49.805 Number of Reclaim Groups: 2 00:07:49.805 Number of Reclaim Unit Handles: 8 00:07:49.805 Max Placement Identifiers: 128 00:07:49.805 Number of Namespaces Supported: 256 00:07:49.805 Reclaim Unit Nominal Size: 6000000 bytes 00:07:49.805 Estimated Reclaim Unit Time Limit: Not Reported 00:07:49.805 RUH Desc #000: RUH Type: Initially Isolated 00:07:49.805 RUH Desc #001: RUH Type: Initially Isolated 00:07:49.805 RUH Desc #002: RUH Type: Initially Isolated 00:07:49.805 RUH Desc #003: RUH Type: Initially Isolated 00:07:49.805 RUH Desc #004: RUH Type: Initially Isolated 00:07:49.805 RUH Desc #005: RUH Type: Initially Isolated 00:07:49.805 RUH Desc #006: RUH Type: Initially Isolated 00:07:49.805 RUH Desc #007: RUH Type: Initially Isolated 00:07:49.805 00:07:49.805 FDP reclaim unit handle usage log page 00:07:49.805 ====================================== 00:07:49.805 Number of Reclaim Unit Handles: 8 00:07:49.805 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:49.805 RUH Usage Desc #001: RUH Attributes: Unused 00:07:49.805 RUH Usage Desc #002: RUH Attributes: Unused 00:07:49.805 RUH Usage Desc #003: RUH Attributes: Unused 00:07:49.805 RUH Usage Desc #004: RUH Attributes: Unused 00:07:49.805 RUH Usage Desc #005: RUH Attributes: Unused 00:07:49.805 RUH Usage Desc #006: RUH Attributes: Unused 00:07:49.805 RUH Usage Desc #007: RUH Attributes: Unused 00:07:49.805 00:07:49.805 FDP statistics log page 00:07:49.805 ======================= 00:07:49.806 Host bytes with metadata written: 472563712 00:07:49.806 Media bytes with metadata written: 472629248 00:07:49.806 Media bytes erased: 0 00:07:49.806 00:07:49.806 FDP events log page 00:07:49.806 =================== 00:07:49.806 Number of FDP events: 0 00:07:49.806 00:07:49.806 NVM Specific Namespace Data 00:07:49.806 =========================== 00:07:49.806 Logical Block Storage Tag Mask: 0 00:07:49.806 Protection Information Capabilities: 00:07:49.806 16b Guard Protection Information Storage Tag Support: No 00:07:49.806 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:49.806 Storage Tag Check Read Support: No 00:07:49.806 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.806 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.806 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.806 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.806 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.806 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.806 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.806 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.806 00:07:49.806 real 0m1.225s 00:07:49.806 user 0m0.451s 00:07:49.806 sys 0m0.547s 00:07:49.806 14:00:51 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.806 14:00:51 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:49.806 ************************************ 00:07:49.806 END TEST nvme_identify 00:07:49.806 ************************************ 00:07:49.806 14:00:51 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:49.806 14:00:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.806 14:00:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.806 14:00:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.806 ************************************ 00:07:49.806 START TEST nvme_perf 00:07:49.806 ************************************ 00:07:49.806 14:00:51 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:49.806 14:00:51 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:51.181 Initializing NVMe Controllers 00:07:51.181 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.181 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.181 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.181 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.181 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:51.181 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:51.181 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:51.181 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:51.181 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:51.181 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:51.181 Initialization complete. Launching workers. 
00:07:51.181 ======================================================== 00:07:51.181 Latency(us) 00:07:51.181 Device Information : IOPS MiB/s Average min max 00:07:51.181 PCIE (0000:00:10.0) NSID 1 from core 0: 18464.74 216.38 6941.31 5533.19 32706.43 00:07:51.181 PCIE (0000:00:11.0) NSID 1 from core 0: 18464.74 216.38 6932.06 5619.79 30950.32 00:07:51.181 PCIE (0000:00:13.0) NSID 1 from core 0: 18464.74 216.38 6921.54 5584.76 29572.60 00:07:51.181 PCIE (0000:00:12.0) NSID 1 from core 0: 18464.74 216.38 6910.84 5620.00 27787.32 00:07:51.181 PCIE (0000:00:12.0) NSID 2 from core 0: 18464.74 216.38 6900.17 5613.36 26012.70 00:07:51.181 PCIE (0000:00:12.0) NSID 3 from core 0: 18528.63 217.13 6865.71 5652.54 20870.40 00:07:51.181 ======================================================== 00:07:51.181 Total : 110852.33 1299.05 6911.91 5533.19 32706.43 00:07:51.181 00:07:51.181 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:51.181 ================================================================================= 00:07:51.181 1.00000% : 5696.591us 00:07:51.181 10.00000% : 6024.271us 00:07:51.181 25.00000% : 6225.920us 00:07:51.181 50.00000% : 6553.600us 00:07:51.181 75.00000% : 6856.074us 00:07:51.181 90.00000% : 7763.495us 00:07:51.181 95.00000% : 9729.575us 00:07:51.181 98.00000% : 11746.068us 00:07:51.181 99.00000% : 13308.849us 00:07:51.181 99.50000% : 27424.295us 00:07:51.181 99.90000% : 32263.877us 00:07:51.181 99.99000% : 32667.175us 00:07:51.181 99.99900% : 32868.825us 00:07:51.181 99.99990% : 32868.825us 00:07:51.181 99.99999% : 32868.825us 00:07:51.181 00:07:51.181 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:51.181 ================================================================================= 00:07:51.181 1.00000% : 5747.003us 00:07:51.181 10.00000% : 6074.683us 00:07:51.181 25.00000% : 6251.126us 00:07:51.181 50.00000% : 6553.600us 00:07:51.181 75.00000% : 6805.662us 00:07:51.181 90.00000% : 7864.320us 00:07:51.181 95.00000% : 9830.400us 00:07:51.181 98.00000% : 11594.831us 00:07:51.181 99.00000% : 13308.849us 00:07:51.181 99.50000% : 25710.277us 00:07:51.181 99.90000% : 30650.683us 00:07:51.181 99.99000% : 31053.982us 00:07:51.181 99.99900% : 31053.982us 00:07:51.181 99.99990% : 31053.982us 00:07:51.181 99.99999% : 31053.982us 00:07:51.181 00:07:51.181 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:51.181 ================================================================================= 00:07:51.181 1.00000% : 5747.003us 00:07:51.181 10.00000% : 6074.683us 00:07:51.181 25.00000% : 6251.126us 00:07:51.181 50.00000% : 6553.600us 00:07:51.181 75.00000% : 6805.662us 00:07:51.181 90.00000% : 7511.434us 00:07:51.181 95.00000% : 10032.049us 00:07:51.181 98.00000% : 11746.068us 00:07:51.181 99.00000% : 13712.148us 00:07:51.181 99.50000% : 24399.557us 00:07:51.181 99.90000% : 29239.138us 00:07:51.181 99.99000% : 29642.437us 00:07:51.181 99.99900% : 29642.437us 00:07:51.181 99.99990% : 29642.437us 00:07:51.181 99.99999% : 29642.437us 00:07:51.181 00:07:51.181 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:51.181 ================================================================================= 00:07:51.181 1.00000% : 5772.209us 00:07:51.181 10.00000% : 6074.683us 00:07:51.181 25.00000% : 6251.126us 00:07:51.181 50.00000% : 6553.600us 00:07:51.181 75.00000% : 6805.662us 00:07:51.182 90.00000% : 7612.258us 00:07:51.182 95.00000% : 9880.812us 00:07:51.182 98.00000% : 11746.068us 00:07:51.182 99.00000% : 
13812.972us 00:07:51.182 99.50000% : 22584.714us 00:07:51.182 99.90000% : 27424.295us 00:07:51.182 99.99000% : 27827.594us 00:07:51.182 99.99900% : 27827.594us 00:07:51.182 99.99990% : 27827.594us 00:07:51.182 99.99999% : 27827.594us 00:07:51.182 00:07:51.182 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:51.182 ================================================================================= 00:07:51.182 1.00000% : 5772.209us 00:07:51.182 10.00000% : 6074.683us 00:07:51.182 25.00000% : 6251.126us 00:07:51.182 50.00000% : 6553.600us 00:07:51.182 75.00000% : 6805.662us 00:07:51.182 90.00000% : 7763.495us 00:07:51.182 95.00000% : 9679.163us 00:07:51.182 98.00000% : 11796.480us 00:07:51.182 99.00000% : 13208.025us 00:07:51.182 99.50000% : 20870.695us 00:07:51.182 99.90000% : 25609.452us 00:07:51.182 99.99000% : 26012.751us 00:07:51.182 99.99900% : 26012.751us 00:07:51.182 99.99990% : 26012.751us 00:07:51.182 99.99999% : 26012.751us 00:07:51.182 00:07:51.182 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:51.182 ================================================================================= 00:07:51.182 1.00000% : 5747.003us 00:07:51.182 10.00000% : 6074.683us 00:07:51.182 25.00000% : 6251.126us 00:07:51.182 50.00000% : 6553.600us 00:07:51.182 75.00000% : 6805.662us 00:07:51.182 90.00000% : 7813.908us 00:07:51.182 95.00000% : 9679.163us 00:07:51.182 98.00000% : 11846.892us 00:07:51.182 99.00000% : 13208.025us 00:07:51.182 99.50000% : 15728.640us 00:07:51.182 99.90000% : 20467.397us 00:07:51.182 99.99000% : 20870.695us 00:07:51.182 99.99900% : 20870.695us 00:07:51.182 99.99990% : 20870.695us 00:07:51.182 99.99999% : 20870.695us 00:07:51.182 00:07:51.182 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:51.182 ============================================================================== 00:07:51.182 Range in us Cumulative IO count 00:07:51.182 5520.148 - 5545.354: 0.0162% ( 3) 00:07:51.182 5545.354 - 5570.560: 0.0757% ( 11) 00:07:51.182 5570.560 - 5595.766: 0.2000% ( 23) 00:07:51.182 5595.766 - 5620.972: 0.4866% ( 53) 00:07:51.182 5620.972 - 5646.178: 0.7785% ( 54) 00:07:51.182 5646.178 - 5671.385: 0.9732% ( 36) 00:07:51.182 5671.385 - 5696.591: 1.2543% ( 52) 00:07:51.182 5696.591 - 5721.797: 1.4219% ( 31) 00:07:51.182 5721.797 - 5747.003: 1.7247% ( 56) 00:07:51.182 5747.003 - 5772.209: 2.0329% ( 57) 00:07:51.182 5772.209 - 5797.415: 2.4330% ( 74) 00:07:51.182 5797.415 - 5822.622: 2.9466% ( 95) 00:07:51.182 5822.622 - 5847.828: 3.5197% ( 106) 00:07:51.182 5847.828 - 5873.034: 4.1739% ( 121) 00:07:51.182 5873.034 - 5898.240: 5.0173% ( 156) 00:07:51.182 5898.240 - 5923.446: 5.9364% ( 170) 00:07:51.182 5923.446 - 5948.652: 7.1259% ( 220) 00:07:51.182 5948.652 - 5973.858: 8.3532% ( 227) 00:07:51.182 5973.858 - 5999.065: 9.7751% ( 263) 00:07:51.182 5999.065 - 6024.271: 11.3106% ( 284) 00:07:51.182 6024.271 - 6049.477: 13.0353% ( 319) 00:07:51.182 6049.477 - 6074.683: 14.6951% ( 307) 00:07:51.182 6074.683 - 6099.889: 16.5603% ( 345) 00:07:51.182 6099.889 - 6125.095: 18.3986% ( 340) 00:07:51.182 6125.095 - 6150.302: 20.1881% ( 331) 00:07:51.182 6150.302 - 6175.508: 22.2481% ( 381) 00:07:51.182 6175.508 - 6200.714: 24.1944% ( 360) 00:07:51.182 6200.714 - 6225.920: 26.3571% ( 400) 00:07:51.182 6225.920 - 6251.126: 28.3521% ( 369) 00:07:51.182 6251.126 - 6276.332: 30.4012% ( 379) 00:07:51.182 6276.332 - 6301.538: 32.2881% ( 349) 00:07:51.182 6301.538 - 6326.745: 34.4669% ( 403) 00:07:51.182 6326.745 - 6351.951: 36.4565% ( 368) 
00:07:51.182 6351.951 - 6377.157: 38.6678% ( 409) 00:07:51.182 6377.157 - 6402.363: 40.6196% ( 361) 00:07:51.182 6402.363 - 6427.569: 42.7282% ( 390) 00:07:51.182 6427.569 - 6452.775: 44.9016% ( 402) 00:07:51.182 6452.775 - 6503.188: 49.1458% ( 785) 00:07:51.182 6503.188 - 6553.600: 53.3034% ( 769) 00:07:51.182 6553.600 - 6604.012: 57.3097% ( 741) 00:07:51.182 6604.012 - 6654.425: 61.0402% ( 690) 00:07:51.182 6654.425 - 6704.837: 64.9113% ( 716) 00:07:51.182 6704.837 - 6755.249: 68.7338% ( 707) 00:07:51.182 6755.249 - 6805.662: 72.2913% ( 658) 00:07:51.182 6805.662 - 6856.074: 75.6163% ( 615) 00:07:51.182 6856.074 - 6906.486: 78.6224% ( 556) 00:07:51.182 6906.486 - 6956.898: 81.1040% ( 459) 00:07:51.182 6956.898 - 7007.311: 83.1693% ( 382) 00:07:51.182 7007.311 - 7057.723: 84.6670% ( 277) 00:07:51.182 7057.723 - 7108.135: 85.9429% ( 236) 00:07:51.182 7108.135 - 7158.548: 86.8026% ( 159) 00:07:51.182 7158.548 - 7208.960: 87.2783% ( 88) 00:07:51.182 7208.960 - 7259.372: 87.7325% ( 84) 00:07:51.182 7259.372 - 7309.785: 88.0731% ( 63) 00:07:51.182 7309.785 - 7360.197: 88.3867% ( 58) 00:07:51.182 7360.197 - 7410.609: 88.6192% ( 43) 00:07:51.182 7410.609 - 7461.022: 88.8679% ( 46) 00:07:51.182 7461.022 - 7511.434: 89.0841% ( 40) 00:07:51.182 7511.434 - 7561.846: 89.3112% ( 42) 00:07:51.182 7561.846 - 7612.258: 89.4842% ( 32) 00:07:51.182 7612.258 - 7662.671: 89.6626% ( 33) 00:07:51.182 7662.671 - 7713.083: 89.8356% ( 32) 00:07:51.182 7713.083 - 7763.495: 90.0032% ( 31) 00:07:51.182 7763.495 - 7813.908: 90.1979% ( 36) 00:07:51.182 7813.908 - 7864.320: 90.3655% ( 31) 00:07:51.182 7864.320 - 7914.732: 90.5223% ( 29) 00:07:51.182 7914.732 - 7965.145: 90.6953% ( 32) 00:07:51.182 7965.145 - 8015.557: 90.8791% ( 34) 00:07:51.182 8015.557 - 8065.969: 91.0846% ( 38) 00:07:51.182 8065.969 - 8116.382: 91.2738% ( 35) 00:07:51.182 8116.382 - 8166.794: 91.4306% ( 29) 00:07:51.182 8166.794 - 8217.206: 91.5982% ( 31) 00:07:51.182 8217.206 - 8267.618: 91.7279% ( 24) 00:07:51.182 8267.618 - 8318.031: 91.8685% ( 26) 00:07:51.182 8318.031 - 8368.443: 92.0091% ( 26) 00:07:51.182 8368.443 - 8418.855: 92.1605% ( 28) 00:07:51.182 8418.855 - 8469.268: 92.2578% ( 18) 00:07:51.182 8469.268 - 8519.680: 92.4092% ( 28) 00:07:51.182 8519.680 - 8570.092: 92.5660% ( 29) 00:07:51.182 8570.092 - 8620.505: 92.7011% ( 25) 00:07:51.182 8620.505 - 8670.917: 92.8579% ( 29) 00:07:51.182 8670.917 - 8721.329: 92.9877% ( 24) 00:07:51.182 8721.329 - 8771.742: 93.0904% ( 19) 00:07:51.182 8771.742 - 8822.154: 93.2364% ( 27) 00:07:51.182 8822.154 - 8872.566: 93.3661% ( 24) 00:07:51.182 8872.566 - 8922.978: 93.5446% ( 33) 00:07:51.182 8922.978 - 8973.391: 93.7122% ( 31) 00:07:51.182 8973.391 - 9023.803: 93.8095% ( 18) 00:07:51.182 9023.803 - 9074.215: 93.8960% ( 16) 00:07:51.182 9074.215 - 9124.628: 93.9879% ( 17) 00:07:51.182 9124.628 - 9175.040: 94.0690% ( 15) 00:07:51.182 9175.040 - 9225.452: 94.1501% ( 15) 00:07:51.182 9225.452 - 9275.865: 94.2258% ( 14) 00:07:51.182 9275.865 - 9326.277: 94.3231% ( 18) 00:07:51.182 9326.277 - 9376.689: 94.4204% ( 18) 00:07:51.182 9376.689 - 9427.102: 94.5285% ( 20) 00:07:51.182 9427.102 - 9477.514: 94.6096% ( 15) 00:07:51.182 9477.514 - 9527.926: 94.6962% ( 16) 00:07:51.182 9527.926 - 9578.338: 94.7827% ( 16) 00:07:51.182 9578.338 - 9628.751: 94.8854% ( 19) 00:07:51.182 9628.751 - 9679.163: 94.9827% ( 18) 00:07:51.182 9679.163 - 9729.575: 95.0692% ( 16) 00:07:51.182 9729.575 - 9779.988: 95.1557% ( 16) 00:07:51.182 9779.988 - 9830.400: 95.2855% ( 24) 00:07:51.182 9830.400 - 9880.812: 95.3612% ( 14) 
00:07:51.182 9880.812 - 9931.225: 95.4585% ( 18) 00:07:51.182 9931.225 - 9981.637: 95.5125% ( 10) 00:07:51.182 9981.637 - 10032.049: 95.6045% ( 17) 00:07:51.182 10032.049 - 10082.462: 95.6693% ( 12) 00:07:51.182 10082.462 - 10132.874: 95.7504% ( 15) 00:07:51.182 10132.874 - 10183.286: 95.8099% ( 11) 00:07:51.182 10183.286 - 10233.698: 95.9072% ( 18) 00:07:51.182 10233.698 - 10284.111: 96.0045% ( 18) 00:07:51.182 10284.111 - 10334.523: 96.1019% ( 18) 00:07:51.182 10334.523 - 10384.935: 96.1776% ( 14) 00:07:51.182 10384.935 - 10435.348: 96.2749% ( 18) 00:07:51.182 10435.348 - 10485.760: 96.3452% ( 13) 00:07:51.182 10485.760 - 10536.172: 96.4263% ( 15) 00:07:51.182 10536.172 - 10586.585: 96.5074% ( 15) 00:07:51.182 10586.585 - 10636.997: 96.5614% ( 10) 00:07:51.182 10636.997 - 10687.409: 96.6641% ( 19) 00:07:51.182 10687.409 - 10737.822: 96.7723% ( 20) 00:07:51.182 10737.822 - 10788.234: 96.8588% ( 16) 00:07:51.182 10788.234 - 10838.646: 96.9237% ( 12) 00:07:51.182 10838.646 - 10889.058: 96.9939% ( 13) 00:07:51.182 10889.058 - 10939.471: 97.0588% ( 12) 00:07:51.182 10939.471 - 10989.883: 97.1183% ( 11) 00:07:51.182 10989.883 - 11040.295: 97.1886% ( 13) 00:07:51.182 11040.295 - 11090.708: 97.2643% ( 14) 00:07:51.182 11090.708 - 11141.120: 97.3454% ( 15) 00:07:51.182 11141.120 - 11191.532: 97.4048% ( 11) 00:07:51.182 11191.532 - 11241.945: 97.4697% ( 12) 00:07:51.182 11241.945 - 11292.357: 97.5076% ( 7) 00:07:51.182 11292.357 - 11342.769: 97.5670% ( 11) 00:07:51.182 11342.769 - 11393.182: 97.6265% ( 11) 00:07:51.182 11393.182 - 11443.594: 97.6806% ( 10) 00:07:51.182 11443.594 - 11494.006: 97.7292% ( 9) 00:07:51.182 11494.006 - 11544.418: 97.7995% ( 13) 00:07:51.182 11544.418 - 11594.831: 97.8644% ( 12) 00:07:51.182 11594.831 - 11645.243: 97.9239% ( 11) 00:07:51.182 11645.243 - 11695.655: 97.9563% ( 6) 00:07:51.182 11695.655 - 11746.068: 98.0212% ( 12) 00:07:51.182 11746.068 - 11796.480: 98.0644% ( 8) 00:07:51.182 11796.480 - 11846.892: 98.1023% ( 7) 00:07:51.182 11846.892 - 11897.305: 98.1293% ( 5) 00:07:51.182 11897.305 - 11947.717: 98.1564% ( 5) 00:07:51.183 11947.717 - 11998.129: 98.1888% ( 6) 00:07:51.183 11998.129 - 12048.542: 98.1996% ( 2) 00:07:51.183 12048.542 - 12098.954: 98.2158% ( 3) 00:07:51.183 12098.954 - 12149.366: 98.2375% ( 4) 00:07:51.183 12149.366 - 12199.778: 98.2645% ( 5) 00:07:51.183 12199.778 - 12250.191: 98.3023% ( 7) 00:07:51.183 12250.191 - 12300.603: 98.3294% ( 5) 00:07:51.183 12300.603 - 12351.015: 98.3672% ( 7) 00:07:51.183 12351.015 - 12401.428: 98.3942% ( 5) 00:07:51.183 12401.428 - 12451.840: 98.4321% ( 7) 00:07:51.183 12451.840 - 12502.252: 98.4483% ( 3) 00:07:51.183 12502.252 - 12552.665: 98.4916% ( 8) 00:07:51.183 12552.665 - 12603.077: 98.5186% ( 5) 00:07:51.183 12603.077 - 12653.489: 98.5456% ( 5) 00:07:51.183 12653.489 - 12703.902: 98.5889% ( 8) 00:07:51.183 12703.902 - 12754.314: 98.6159% ( 5) 00:07:51.183 12754.314 - 12804.726: 98.6754% ( 11) 00:07:51.183 12804.726 - 12855.138: 98.7186% ( 8) 00:07:51.183 12855.138 - 12905.551: 98.7619% ( 8) 00:07:51.183 12905.551 - 13006.375: 98.8430% ( 15) 00:07:51.183 13006.375 - 13107.200: 98.9295% ( 16) 00:07:51.183 13107.200 - 13208.025: 98.9944% ( 12) 00:07:51.183 13208.025 - 13308.849: 99.0376% ( 8) 00:07:51.183 13308.849 - 13409.674: 99.0863% ( 9) 00:07:51.183 13409.674 - 13510.498: 99.1404% ( 10) 00:07:51.183 13510.498 - 13611.323: 99.1944% ( 10) 00:07:51.183 13611.323 - 13712.148: 99.2323% ( 7) 00:07:51.183 13712.148 - 13812.972: 99.2863% ( 10) 00:07:51.183 13812.972 - 13913.797: 99.3080% ( 4) 00:07:51.183 
26416.049 - 26617.698: 99.3350% ( 5) 00:07:51.183 26617.698 - 26819.348: 99.3782% ( 8) 00:07:51.183 26819.348 - 27020.997: 99.4215% ( 8) 00:07:51.183 27020.997 - 27222.646: 99.4647% ( 8) 00:07:51.183 27222.646 - 27424.295: 99.5026% ( 7) 00:07:51.183 27424.295 - 27625.945: 99.5513% ( 9) 00:07:51.183 27625.945 - 27827.594: 99.5945% ( 8) 00:07:51.183 27827.594 - 28029.243: 99.6378% ( 8) 00:07:51.183 28029.243 - 28230.892: 99.6540% ( 3) 00:07:51.183 31053.982 - 31255.631: 99.6918% ( 7) 00:07:51.183 31255.631 - 31457.280: 99.7297% ( 7) 00:07:51.183 31457.280 - 31658.929: 99.7729% ( 8) 00:07:51.183 31658.929 - 31860.578: 99.8108% ( 7) 00:07:51.183 31860.578 - 32062.228: 99.8594% ( 9) 00:07:51.183 32062.228 - 32263.877: 99.9081% ( 9) 00:07:51.183 32263.877 - 32465.526: 99.9513% ( 8) 00:07:51.183 32465.526 - 32667.175: 99.9946% ( 8) 00:07:51.183 32667.175 - 32868.825: 100.0000% ( 1) 00:07:51.183 00:07:51.183 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:51.183 ============================================================================== 00:07:51.183 Range in us Cumulative IO count 00:07:51.183 5595.766 - 5620.972: 0.0108% ( 2) 00:07:51.183 5620.972 - 5646.178: 0.0703% ( 11) 00:07:51.183 5646.178 - 5671.385: 0.1244% ( 10) 00:07:51.183 5671.385 - 5696.591: 0.3406% ( 40) 00:07:51.183 5696.591 - 5721.797: 0.7191% ( 70) 00:07:51.183 5721.797 - 5747.003: 1.0381% ( 59) 00:07:51.183 5747.003 - 5772.209: 1.4219% ( 71) 00:07:51.183 5772.209 - 5797.415: 1.7571% ( 62) 00:07:51.183 5797.415 - 5822.622: 2.0329% ( 51) 00:07:51.183 5822.622 - 5847.828: 2.3302% ( 55) 00:07:51.183 5847.828 - 5873.034: 2.6871% ( 66) 00:07:51.183 5873.034 - 5898.240: 3.1683% ( 89) 00:07:51.183 5898.240 - 5923.446: 3.7846% ( 114) 00:07:51.183 5923.446 - 5948.652: 4.6605% ( 162) 00:07:51.183 5948.652 - 5973.858: 5.5363% ( 162) 00:07:51.183 5973.858 - 5999.065: 6.3635% ( 153) 00:07:51.183 5999.065 - 6024.271: 7.9423% ( 292) 00:07:51.183 6024.271 - 6049.477: 9.4291% ( 275) 00:07:51.183 6049.477 - 6074.683: 11.1754% ( 323) 00:07:51.183 6074.683 - 6099.889: 12.9109% ( 321) 00:07:51.183 6099.889 - 6125.095: 14.7383% ( 338) 00:07:51.183 6125.095 - 6150.302: 16.7279% ( 368) 00:07:51.183 6150.302 - 6175.508: 18.8311% ( 389) 00:07:51.183 6175.508 - 6200.714: 20.9180% ( 386) 00:07:51.183 6200.714 - 6225.920: 23.1618% ( 415) 00:07:51.183 6225.920 - 6251.126: 25.5136% ( 435) 00:07:51.183 6251.126 - 6276.332: 27.8547% ( 433) 00:07:51.183 6276.332 - 6301.538: 30.1741% ( 429) 00:07:51.183 6301.538 - 6326.745: 32.7044% ( 468) 00:07:51.183 6326.745 - 6351.951: 35.1752% ( 457) 00:07:51.183 6351.951 - 6377.157: 37.6946% ( 466) 00:07:51.183 6377.157 - 6402.363: 40.1817% ( 460) 00:07:51.183 6402.363 - 6427.569: 42.6903% ( 464) 00:07:51.183 6427.569 - 6452.775: 45.2098% ( 466) 00:07:51.183 6452.775 - 6503.188: 49.9567% ( 878) 00:07:51.183 6503.188 - 6553.600: 54.6064% ( 860) 00:07:51.183 6553.600 - 6604.012: 59.1641% ( 843) 00:07:51.183 6604.012 - 6654.425: 63.5867% ( 818) 00:07:51.183 6654.425 - 6704.837: 68.0039% ( 817) 00:07:51.183 6704.837 - 6755.249: 72.0588% ( 750) 00:07:51.183 6755.249 - 6805.662: 75.7731% ( 687) 00:07:51.183 6805.662 - 6856.074: 78.9684% ( 591) 00:07:51.183 6856.074 - 6906.486: 81.5960% ( 486) 00:07:51.183 6906.486 - 6956.898: 83.5316% ( 358) 00:07:51.183 6956.898 - 7007.311: 85.0779% ( 286) 00:07:51.183 7007.311 - 7057.723: 86.1159% ( 192) 00:07:51.183 7057.723 - 7108.135: 86.7755% ( 122) 00:07:51.183 7108.135 - 7158.548: 87.1702% ( 73) 00:07:51.183 7158.548 - 7208.960: 87.5487% ( 70) 00:07:51.183 7208.960 - 
7259.372: 87.8352% ( 53) 00:07:51.183 7259.372 - 7309.785: 88.1109% ( 51) 00:07:51.183 7309.785 - 7360.197: 88.3975% ( 53) 00:07:51.183 7360.197 - 7410.609: 88.6083% ( 39) 00:07:51.183 7410.609 - 7461.022: 88.7814% ( 32) 00:07:51.183 7461.022 - 7511.434: 88.9544% ( 32) 00:07:51.183 7511.434 - 7561.846: 89.1166% ( 30) 00:07:51.183 7561.846 - 7612.258: 89.3004% ( 34) 00:07:51.183 7612.258 - 7662.671: 89.4626% ( 30) 00:07:51.183 7662.671 - 7713.083: 89.6518% ( 35) 00:07:51.183 7713.083 - 7763.495: 89.8302% ( 33) 00:07:51.183 7763.495 - 7813.908: 89.9978% ( 31) 00:07:51.183 7813.908 - 7864.320: 90.1925% ( 36) 00:07:51.183 7864.320 - 7914.732: 90.3871% ( 36) 00:07:51.183 7914.732 - 7965.145: 90.5493% ( 30) 00:07:51.183 7965.145 - 8015.557: 90.7061% ( 29) 00:07:51.183 8015.557 - 8065.969: 90.9170% ( 39) 00:07:51.183 8065.969 - 8116.382: 91.1116% ( 36) 00:07:51.183 8116.382 - 8166.794: 91.3116% ( 37) 00:07:51.183 8166.794 - 8217.206: 91.5387% ( 42) 00:07:51.183 8217.206 - 8267.618: 91.7225% ( 34) 00:07:51.183 8267.618 - 8318.031: 91.9064% ( 34) 00:07:51.183 8318.031 - 8368.443: 92.0740% ( 31) 00:07:51.183 8368.443 - 8418.855: 92.2524% ( 33) 00:07:51.183 8418.855 - 8469.268: 92.4254% ( 32) 00:07:51.183 8469.268 - 8519.680: 92.5660% ( 26) 00:07:51.183 8519.680 - 8570.092: 92.7444% ( 33) 00:07:51.183 8570.092 - 8620.505: 92.9012% ( 29) 00:07:51.183 8620.505 - 8670.917: 92.9877% ( 16) 00:07:51.183 8670.917 - 8721.329: 93.1066% ( 22) 00:07:51.183 8721.329 - 8771.742: 93.2256% ( 22) 00:07:51.183 8771.742 - 8822.154: 93.3391% ( 21) 00:07:51.183 8822.154 - 8872.566: 93.4689% ( 24) 00:07:51.183 8872.566 - 8922.978: 93.5175% ( 9) 00:07:51.183 8922.978 - 8973.391: 93.5770% ( 11) 00:07:51.183 8973.391 - 9023.803: 93.6473% ( 13) 00:07:51.183 9023.803 - 9074.215: 93.7176% ( 13) 00:07:51.183 9074.215 - 9124.628: 93.8203% ( 19) 00:07:51.183 9124.628 - 9175.040: 93.9014% ( 15) 00:07:51.183 9175.040 - 9225.452: 93.9825% ( 15) 00:07:51.183 9225.452 - 9275.865: 94.0744% ( 17) 00:07:51.183 9275.865 - 9326.277: 94.1609% ( 16) 00:07:51.183 9326.277 - 9376.689: 94.2528% ( 17) 00:07:51.183 9376.689 - 9427.102: 94.3447% ( 17) 00:07:51.183 9427.102 - 9477.514: 94.4312% ( 16) 00:07:51.183 9477.514 - 9527.926: 94.5177% ( 16) 00:07:51.183 9527.926 - 9578.338: 94.5988% ( 15) 00:07:51.183 9578.338 - 9628.751: 94.6853% ( 16) 00:07:51.183 9628.751 - 9679.163: 94.7772% ( 17) 00:07:51.183 9679.163 - 9729.575: 94.8475% ( 13) 00:07:51.183 9729.575 - 9779.988: 94.9286% ( 15) 00:07:51.183 9779.988 - 9830.400: 95.0151% ( 16) 00:07:51.183 9830.400 - 9880.812: 95.0908% ( 14) 00:07:51.183 9880.812 - 9931.225: 95.1827% ( 17) 00:07:51.183 9931.225 - 9981.637: 95.2801% ( 18) 00:07:51.183 9981.637 - 10032.049: 95.3612% ( 15) 00:07:51.183 10032.049 - 10082.462: 95.4477% ( 16) 00:07:51.183 10082.462 - 10132.874: 95.5396% ( 17) 00:07:51.183 10132.874 - 10183.286: 95.6261% ( 16) 00:07:51.183 10183.286 - 10233.698: 95.7180% ( 17) 00:07:51.183 10233.698 - 10284.111: 95.8207% ( 19) 00:07:51.183 10284.111 - 10334.523: 95.8910% ( 13) 00:07:51.183 10334.523 - 10384.935: 95.9667% ( 14) 00:07:51.183 10384.935 - 10435.348: 96.0586% ( 17) 00:07:51.183 10435.348 - 10485.760: 96.1613% ( 19) 00:07:51.183 10485.760 - 10536.172: 96.2749% ( 21) 00:07:51.183 10536.172 - 10586.585: 96.3668% ( 17) 00:07:51.183 10586.585 - 10636.997: 96.4695% ( 19) 00:07:51.183 10636.997 - 10687.409: 96.6263% ( 29) 00:07:51.183 10687.409 - 10737.822: 96.7398% ( 21) 00:07:51.183 10737.822 - 10788.234: 96.8263% ( 16) 00:07:51.183 10788.234 - 10838.646: 96.9074% ( 15) 00:07:51.183 
10838.646 - 10889.058: 96.9885% ( 15) 00:07:51.183 10889.058 - 10939.471: 97.0804% ( 17) 00:07:51.183 10939.471 - 10989.883: 97.1561% ( 14) 00:07:51.184 10989.883 - 11040.295: 97.2318% ( 14) 00:07:51.184 11040.295 - 11090.708: 97.3183% ( 16) 00:07:51.184 11090.708 - 11141.120: 97.4157% ( 18) 00:07:51.184 11141.120 - 11191.532: 97.5076% ( 17) 00:07:51.184 11191.532 - 11241.945: 97.5887% ( 15) 00:07:51.184 11241.945 - 11292.357: 97.6752% ( 16) 00:07:51.184 11292.357 - 11342.769: 97.7509% ( 14) 00:07:51.184 11342.769 - 11393.182: 97.8320% ( 15) 00:07:51.184 11393.182 - 11443.594: 97.9022% ( 13) 00:07:51.184 11443.594 - 11494.006: 97.9455% ( 8) 00:07:51.184 11494.006 - 11544.418: 97.9942% ( 9) 00:07:51.184 11544.418 - 11594.831: 98.0374% ( 8) 00:07:51.184 11594.831 - 11645.243: 98.0753% ( 7) 00:07:51.184 11645.243 - 11695.655: 98.1239% ( 9) 00:07:51.184 11695.655 - 11746.068: 98.1618% ( 7) 00:07:51.184 11746.068 - 11796.480: 98.2104% ( 9) 00:07:51.184 11796.480 - 11846.892: 98.2537% ( 8) 00:07:51.184 11846.892 - 11897.305: 98.2969% ( 8) 00:07:51.184 11897.305 - 11947.717: 98.3240% ( 5) 00:07:51.184 11947.717 - 11998.129: 98.3456% ( 4) 00:07:51.184 11998.129 - 12048.542: 98.3726% ( 5) 00:07:51.184 12048.542 - 12098.954: 98.3942% ( 4) 00:07:51.184 12098.954 - 12149.366: 98.4159% ( 4) 00:07:51.184 12149.366 - 12199.778: 98.4375% ( 4) 00:07:51.184 12199.778 - 12250.191: 98.4591% ( 4) 00:07:51.184 12250.191 - 12300.603: 98.4808% ( 4) 00:07:51.184 12300.603 - 12351.015: 98.5024% ( 4) 00:07:51.184 12351.015 - 12401.428: 98.5186% ( 3) 00:07:51.184 12401.428 - 12451.840: 98.5294% ( 2) 00:07:51.184 12451.840 - 12502.252: 98.5402% ( 2) 00:07:51.184 12502.252 - 12552.665: 98.5564% ( 3) 00:07:51.184 12552.665 - 12603.077: 98.5673% ( 2) 00:07:51.184 12603.077 - 12653.489: 98.5781% ( 2) 00:07:51.184 12653.489 - 12703.902: 98.5889% ( 2) 00:07:51.184 12703.902 - 12754.314: 98.6159% ( 5) 00:07:51.184 12754.314 - 12804.726: 98.6646% ( 9) 00:07:51.184 12804.726 - 12855.138: 98.7078% ( 8) 00:07:51.184 12855.138 - 12905.551: 98.7565% ( 9) 00:07:51.184 12905.551 - 13006.375: 98.8322% ( 14) 00:07:51.184 13006.375 - 13107.200: 98.9133% ( 15) 00:07:51.184 13107.200 - 13208.025: 98.9836% ( 13) 00:07:51.184 13208.025 - 13308.849: 99.0647% ( 15) 00:07:51.184 13308.849 - 13409.674: 99.1404% ( 14) 00:07:51.184 13409.674 - 13510.498: 99.2160% ( 14) 00:07:51.184 13510.498 - 13611.323: 99.2809% ( 12) 00:07:51.184 13611.323 - 13712.148: 99.3080% ( 5) 00:07:51.184 24802.855 - 24903.680: 99.3242% ( 3) 00:07:51.184 24903.680 - 25004.505: 99.3458% ( 4) 00:07:51.184 25004.505 - 25105.329: 99.3728% ( 5) 00:07:51.184 25105.329 - 25206.154: 99.3945% ( 4) 00:07:51.184 25206.154 - 25306.978: 99.4161% ( 4) 00:07:51.184 25306.978 - 25407.803: 99.4377% ( 4) 00:07:51.184 25407.803 - 25508.628: 99.4647% ( 5) 00:07:51.184 25508.628 - 25609.452: 99.4810% ( 3) 00:07:51.184 25609.452 - 25710.277: 99.5026% ( 4) 00:07:51.184 25710.277 - 25811.102: 99.5242% ( 4) 00:07:51.184 25811.102 - 26012.751: 99.5729% ( 9) 00:07:51.184 26012.751 - 26214.400: 99.6215% ( 9) 00:07:51.184 26214.400 - 26416.049: 99.6540% ( 6) 00:07:51.184 29440.788 - 29642.437: 99.6972% ( 8) 00:07:51.184 29642.437 - 29844.086: 99.7405% ( 8) 00:07:51.184 29844.086 - 30045.735: 99.7891% ( 9) 00:07:51.184 30045.735 - 30247.385: 99.8324% ( 8) 00:07:51.184 30247.385 - 30449.034: 99.8811% ( 9) 00:07:51.184 30449.034 - 30650.683: 99.9243% ( 8) 00:07:51.184 30650.683 - 30852.332: 99.9730% ( 9) 00:07:51.184 30852.332 - 31053.982: 100.0000% ( 5) 00:07:51.184 00:07:51.184 Latency histogram for 
PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:51.184 ============================================================================== 00:07:51.184 Range in us Cumulative IO count 00:07:51.184 5570.560 - 5595.766: 0.0108% ( 2) 00:07:51.184 5595.766 - 5620.972: 0.0487% ( 7) 00:07:51.184 5620.972 - 5646.178: 0.1730% ( 23) 00:07:51.184 5646.178 - 5671.385: 0.3460% ( 32) 00:07:51.184 5671.385 - 5696.591: 0.5731% ( 42) 00:07:51.184 5696.591 - 5721.797: 0.8326% ( 48) 00:07:51.184 5721.797 - 5747.003: 1.0813% ( 46) 00:07:51.184 5747.003 - 5772.209: 1.3679% ( 53) 00:07:51.184 5772.209 - 5797.415: 1.6382% ( 50) 00:07:51.184 5797.415 - 5822.622: 1.9842% ( 64) 00:07:51.184 5822.622 - 5847.828: 2.3410% ( 66) 00:07:51.184 5847.828 - 5873.034: 2.7411% ( 74) 00:07:51.184 5873.034 - 5898.240: 3.2331% ( 91) 00:07:51.184 5898.240 - 5923.446: 3.7359% ( 93) 00:07:51.184 5923.446 - 5948.652: 4.4118% ( 125) 00:07:51.184 5948.652 - 5973.858: 5.2606% ( 157) 00:07:51.184 5973.858 - 5999.065: 6.2608% ( 185) 00:07:51.184 5999.065 - 6024.271: 7.5043% ( 230) 00:07:51.184 6024.271 - 6049.477: 9.0344% ( 283) 00:07:51.184 6049.477 - 6074.683: 10.7699% ( 321) 00:07:51.184 6074.683 - 6099.889: 12.6568% ( 349) 00:07:51.184 6099.889 - 6125.095: 14.5545% ( 351) 00:07:51.184 6125.095 - 6150.302: 16.8144% ( 418) 00:07:51.184 6150.302 - 6175.508: 19.0960% ( 422) 00:07:51.184 6175.508 - 6200.714: 21.0748% ( 366) 00:07:51.184 6200.714 - 6225.920: 23.0915% ( 373) 00:07:51.184 6225.920 - 6251.126: 25.3244% ( 413) 00:07:51.184 6251.126 - 6276.332: 27.9141% ( 479) 00:07:51.184 6276.332 - 6301.538: 30.3471% ( 450) 00:07:51.184 6301.538 - 6326.745: 32.8990% ( 472) 00:07:51.184 6326.745 - 6351.951: 35.2076% ( 427) 00:07:51.184 6351.951 - 6377.157: 37.6135% ( 445) 00:07:51.184 6377.157 - 6402.363: 40.0627% ( 453) 00:07:51.184 6402.363 - 6427.569: 42.5173% ( 454) 00:07:51.184 6427.569 - 6452.775: 44.9989% ( 459) 00:07:51.184 6452.775 - 6503.188: 49.7783% ( 884) 00:07:51.184 6503.188 - 6553.600: 54.4496% ( 864) 00:07:51.184 6553.600 - 6604.012: 58.9857% ( 839) 00:07:51.184 6604.012 - 6654.425: 63.5597% ( 846) 00:07:51.184 6654.425 - 6704.837: 67.9877% ( 819) 00:07:51.184 6704.837 - 6755.249: 72.1021% ( 761) 00:07:51.184 6755.249 - 6805.662: 75.9570% ( 713) 00:07:51.184 6805.662 - 6856.074: 79.2604% ( 611) 00:07:51.184 6856.074 - 6906.486: 81.8285% ( 475) 00:07:51.184 6906.486 - 6956.898: 83.8830% ( 380) 00:07:51.184 6956.898 - 7007.311: 85.4022% ( 281) 00:07:51.184 7007.311 - 7057.723: 86.5917% ( 220) 00:07:51.184 7057.723 - 7108.135: 87.4892% ( 166) 00:07:51.184 7108.135 - 7158.548: 88.0082% ( 96) 00:07:51.184 7158.548 - 7208.960: 88.5381% ( 98) 00:07:51.184 7208.960 - 7259.372: 88.9436% ( 75) 00:07:51.184 7259.372 - 7309.785: 89.2409% ( 55) 00:07:51.184 7309.785 - 7360.197: 89.5112% ( 50) 00:07:51.184 7360.197 - 7410.609: 89.7491% ( 44) 00:07:51.184 7410.609 - 7461.022: 89.9924% ( 45) 00:07:51.184 7461.022 - 7511.434: 90.1817% ( 35) 00:07:51.184 7511.434 - 7561.846: 90.3493% ( 31) 00:07:51.184 7561.846 - 7612.258: 90.5115% ( 30) 00:07:51.184 7612.258 - 7662.671: 90.6250% ( 21) 00:07:51.184 7662.671 - 7713.083: 90.7385% ( 21) 00:07:51.184 7713.083 - 7763.495: 90.8413% ( 19) 00:07:51.184 7763.495 - 7813.908: 90.9818% ( 26) 00:07:51.184 7813.908 - 7864.320: 91.1224% ( 26) 00:07:51.184 7864.320 - 7914.732: 91.2305% ( 20) 00:07:51.184 7914.732 - 7965.145: 91.3224% ( 17) 00:07:51.184 7965.145 - 8015.557: 91.3981% ( 14) 00:07:51.184 8015.557 - 8065.969: 91.4738% ( 14) 00:07:51.184 8065.969 - 8116.382: 91.5766% ( 19) 00:07:51.184 8116.382 - 
[... tail of the preceding latency histogram omitted: buckets 8166.794us - 29642.437us, cumulative 91.6901% -> 100.0000% ...]
00:07:51.185 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:51.185 ==============================================================================
00:07:51.185        Range in us     Cumulative    IO count
[... histogram buckets omitted: 5595.766us - 27827.594us, cumulative 0.0054% -> 100.0000% ...]
00:07:51.186 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:51.186 ==============================================================================
00:07:51.186        Range in us     Cumulative    IO count
[... histogram buckets omitted: 5595.766us - 26012.751us, cumulative 0.0216% -> 100.0000% ...]
00:07:51.187 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:51.187 ==============================================================================
00:07:51.187        Range in us     Cumulative    IO count
[... histogram buckets omitted: 5646.178us - 20870.695us, cumulative 0.0754% -> 100.0000% ...]
00:07:51.188 14:00:52 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:52.124 Initializing NVMe Controllers
00:07:52.124 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:52.124 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:52.124 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:52.124 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:52.124 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:52.124 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:52.124 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:52.124 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:52.124 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:52.124 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:52.124 Initialization complete. Launching workers.
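The spdk_nvme_perf invocation echoed above is dense, so it is restated below with one gloss per flag. This restatement is an editorial sketch: the glosses paraphrase the tool's options as I understand them and are not text from this log; judging from the output that follows, the doubled -L (-LL) is what enables both the percentile summaries and the full per-device latency histograms.

    # Same command as echoed above (glosses are annotations, not tool help text):
    #   -q 128    queue depth: keep 128 I/Os outstanding per namespace
    #   -w write  I/O pattern: 100% sequential writes
    #   -o 12288  I/O size in bytes (12 KiB per request)
    #   -t 1      run time in seconds
    #   -LL       software latency tracking; given twice, it also prints full histograms
    #   -i 0      shared memory group ID (for coexisting with other SPDK processes)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w write -o 12288 -t 1 -LL -i 0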
00:07:52.124 ========================================================
00:07:52.124                                                                    Latency(us)
00:07:52.124 Device Information                       :       IOPS      MiB/s    Average        min        max
00:07:52.125 PCIE (0000:00:10.0) NSID 1 from core  0:   15028.49     176.12    8529.03    6066.60   35522.47
00:07:52.125 PCIE (0000:00:11.0) NSID 1 from core  0:   15028.49     176.12    8515.53    6241.19   33418.79
00:07:52.125 PCIE (0000:00:13.0) NSID 1 from core  0:   15028.49     176.12    8502.31    6318.03   32040.96
00:07:52.125 PCIE (0000:00:12.0) NSID 1 from core  0:   15028.49     176.12    8489.19    6302.74   30260.79
00:07:52.125 PCIE (0000:00:12.0) NSID 2 from core  0:   15028.49     176.12    8476.11    6293.34   28542.74
00:07:52.125 PCIE (0000:00:12.0) NSID 3 from core  0:   15028.49     176.12    8463.02    6184.45   25549.58
00:07:52.125 ========================================================
00:07:52.125 Total                                    :   90170.93    1056.69    8495.86    6066.60   35522.47

[Summary latency data for the six namespaces, consolidated from the per-device blocks; all values in us:]

  Percentile     10.0 NSID1   11.0 NSID1   13.0 NSID1   12.0 NSID1   12.0 NSID2   12.0 NSID3
   1.00000%        6503.188     6604.012     6604.012     6604.012     6654.425     6654.425
  10.00000%        6856.074     6906.486     6906.486     6906.486     6906.486     6906.486
  25.00000%        7360.197     7360.197     7309.785     7309.785     7309.785     7360.197
  50.00000%        8166.794     8217.206     8217.206     8267.618     8217.206     8217.206
  75.00000%        9225.452     9326.277     9326.277     9275.865     9326.277     9326.277
  90.00000%       10082.462     9931.225     9931.225     9931.225     9931.225     9880.812
  95.00000%       10737.822    10737.822    10636.997    10485.760    10536.172    10536.172
  98.00000%       12451.840    12300.603    11998.129    12048.542    12199.778    12199.778
  99.00000%       14720.394    14720.394    14518.745    14014.622    14014.622    14417.920
  99.50000%       26012.751    25609.452    23996.258    23592.960    22383.065    22483.889
  99.90000%       35288.615    33070.474    31860.578    30045.735    28230.892    25105.329
  99.99000%       35490.265    33473.772    32062.228    30247.385    28634.191    25508.628
  99.99900%       35691.914    33473.772    32062.228    30449.034    28634.191    25609.452

  (The 99.99990% and 99.99999% entries repeat each device's 99.99900% value.)
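The consolidated percentile table above was distilled from six per-device "Summary latency data" blocks in the raw tool output. When a run like this is captured to a file, those blocks can be mined mechanically; below is a minimal awk sketch, assuming the raw spdk_nvme_perf stdout has been saved to a hypothetical perf.log (the file name and script are illustrative, not part of this job):

    # Print each device's p99 write latency from raw perf output.
    # Blocks look like:
    #   Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
    #   ...
    #       99.00000% : 14014.622us
    awk '/Summary latency data for/ {
             dev = $0
             sub(/.*Summary latency data for /, "", dev)   # remember the device for this block
         }
         $1 == "99.00000%" {
             printf "%-45s p99 = %s\n", dev, $3            # fields: "99.00000%" ":" "14014.622us"
         }' perf.log

The same percentiles can be re-derived from the cumulative histograms that follow: a percentile is the upper bound of the first histogram bucket whose cumulative percentage reaches the target.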
00:07:52.125 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:52.125 ==============================================================================
00:07:52.125        Range in us     Cumulative    IO count
[... histogram buckets omitted: 6049.477us - 35691.914us, cumulative 0.0066% -> 100.0000% ...]
00:07:52.126 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:52.126 ==============================================================================
00:07:52.126        Range in us     Cumulative    IO count
[... histogram buckets omitted: 6225.920us - 33473.772us, cumulative 0.0066% -> 100.0000% ...]
00:07:52.127 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:52.127 ==============================================================================
00:07:52.127        Range in us     Cumulative    IO count
[... histogram buckets omitted: 6301.538us - 32062.228us, cumulative 0.0066% -> 100.0000% ...]
00:07:52.128 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:52.128 ==============================================================================
00:07:52.128        Range in us     Cumulative    IO count
[... histogram buckets: the captured excerpt ends mid-histogram at the 7965.145us - 8015.557us bucket, around 37% cumulative ...]
39.0625% ( 256) 00:07:52.128 8015.557 - 8065.969: 41.6622% ( 391) 00:07:52.128 8065.969 - 8116.382: 43.9561% ( 345) 00:07:52.128 8116.382 - 8166.794: 46.7354% ( 418) 00:07:52.128 8166.794 - 8217.206: 49.6941% ( 445) 00:07:52.128 8217.206 - 8267.618: 54.5279% ( 727) 00:07:52.128 8267.618 - 8318.031: 58.3976% ( 582) 00:07:52.128 8318.031 - 8368.443: 61.1503% ( 414) 00:07:52.128 8368.443 - 8418.855: 63.2114% ( 310) 00:07:52.128 8418.855 - 8469.268: 64.5944% ( 208) 00:07:52.128 8469.268 - 8519.680: 65.7846% ( 179) 00:07:52.128 8519.680 - 8570.092: 66.7287% ( 142) 00:07:52.128 8570.092 - 8620.505: 67.6795% ( 143) 00:07:52.128 8620.505 - 8670.917: 68.4109% ( 110) 00:07:52.128 8670.917 - 8721.329: 68.9096% ( 75) 00:07:52.128 8721.329 - 8771.742: 69.5279% ( 93) 00:07:52.128 8771.742 - 8822.154: 70.1928% ( 100) 00:07:52.128 8822.154 - 8872.566: 70.6184% ( 64) 00:07:52.128 8872.566 - 8922.978: 71.0106% ( 59) 00:07:52.128 8922.978 - 8973.391: 71.4561% ( 67) 00:07:52.128 8973.391 - 9023.803: 72.1543% ( 105) 00:07:52.128 9023.803 - 9074.215: 72.6662% ( 77) 00:07:52.128 9074.215 - 9124.628: 73.3577% ( 104) 00:07:52.128 9124.628 - 9175.040: 73.9827% ( 94) 00:07:52.128 9175.040 - 9225.452: 74.5944% ( 92) 00:07:52.128 9225.452 - 9275.865: 75.3590% ( 115) 00:07:52.128 9275.865 - 9326.277: 75.9840% ( 94) 00:07:52.128 9326.277 - 9376.689: 76.7952% ( 122) 00:07:52.128 9376.689 - 9427.102: 77.7128% ( 138) 00:07:52.128 9427.102 - 9477.514: 78.7766% ( 160) 00:07:52.128 9477.514 - 9527.926: 80.2926% ( 228) 00:07:52.128 9527.926 - 9578.338: 81.5160% ( 184) 00:07:52.128 9578.338 - 9628.751: 82.8258% ( 197) 00:07:52.128 9628.751 - 9679.163: 84.1822% ( 204) 00:07:52.128 9679.163 - 9729.575: 85.5851% ( 211) 00:07:52.128 9729.575 - 9779.988: 86.8949% ( 197) 00:07:52.128 9779.988 - 9830.400: 88.5239% ( 245) 00:07:52.128 9830.400 - 9880.812: 89.5080% ( 148) 00:07:52.128 9880.812 - 9931.225: 90.1529% ( 97) 00:07:52.128 9931.225 - 9981.637: 90.7713% ( 93) 00:07:52.128 9981.637 - 10032.049: 91.3032% ( 80) 00:07:52.128 10032.049 - 10082.462: 91.9814% ( 102) 00:07:52.128 10082.462 - 10132.874: 92.7061% ( 109) 00:07:52.128 10132.874 - 10183.286: 93.2114% ( 76) 00:07:52.128 10183.286 - 10233.698: 93.6370% ( 64) 00:07:52.128 10233.698 - 10284.111: 94.0027% ( 55) 00:07:52.128 10284.111 - 10334.523: 94.2221% ( 33) 00:07:52.128 10334.523 - 10384.935: 94.4681% ( 37) 00:07:52.128 10384.935 - 10435.348: 94.6609% ( 29) 00:07:52.128 10435.348 - 10485.760: 95.0066% ( 52) 00:07:52.128 10485.760 - 10536.172: 95.2061% ( 30) 00:07:52.128 10536.172 - 10586.585: 95.4854% ( 42) 00:07:52.128 10586.585 - 10636.997: 95.6649% ( 27) 00:07:52.128 10636.997 - 10687.409: 95.8045% ( 21) 00:07:52.128 10687.409 - 10737.822: 95.9508% ( 22) 00:07:52.128 10737.822 - 10788.234: 96.0106% ( 9) 00:07:52.128 10788.234 - 10838.646: 96.0638% ( 8) 00:07:52.128 10838.646 - 10889.058: 96.1370% ( 11) 00:07:52.128 10889.058 - 10939.471: 96.2367% ( 15) 00:07:52.128 10939.471 - 10989.883: 96.3431% ( 16) 00:07:52.128 10989.883 - 11040.295: 96.4295% ( 13) 00:07:52.128 11040.295 - 11090.708: 96.4894% ( 9) 00:07:52.129 11090.708 - 11141.120: 96.5625% ( 11) 00:07:52.129 11141.120 - 11191.532: 96.6356% ( 11) 00:07:52.129 11191.532 - 11241.945: 96.8218% ( 28) 00:07:52.129 11241.945 - 11292.357: 96.9348% ( 17) 00:07:52.129 11292.357 - 11342.769: 96.9880% ( 8) 00:07:52.129 11342.769 - 11393.182: 97.0612% ( 11) 00:07:52.129 11393.182 - 11443.594: 97.1476% ( 13) 00:07:52.129 11443.594 - 11494.006: 97.2008% ( 8) 00:07:52.129 11494.006 - 11544.418: 97.2739% ( 11) 00:07:52.129 11544.418 
- 11594.831: 97.3338% ( 9) 00:07:52.129 11594.831 - 11645.243: 97.4269% ( 14) 00:07:52.129 11645.243 - 11695.655: 97.5000% ( 11) 00:07:52.129 11695.655 - 11746.068: 97.5731% ( 11) 00:07:52.129 11746.068 - 11796.480: 97.6463% ( 11) 00:07:52.129 11796.480 - 11846.892: 97.7128% ( 10) 00:07:52.129 11846.892 - 11897.305: 97.7793% ( 10) 00:07:52.129 11897.305 - 11947.717: 97.8391% ( 9) 00:07:52.129 11947.717 - 11998.129: 97.9322% ( 14) 00:07:52.129 11998.129 - 12048.542: 98.0120% ( 12) 00:07:52.129 12048.542 - 12098.954: 98.1051% ( 14) 00:07:52.129 12098.954 - 12149.366: 98.1649% ( 9) 00:07:52.129 12149.366 - 12199.778: 98.1915% ( 4) 00:07:52.129 12199.778 - 12250.191: 98.2181% ( 4) 00:07:52.129 12250.191 - 12300.603: 98.2447% ( 4) 00:07:52.129 12300.603 - 12351.015: 98.2646% ( 3) 00:07:52.129 12351.015 - 12401.428: 98.2846% ( 3) 00:07:52.129 12401.428 - 12451.840: 98.2979% ( 2) 00:07:52.129 13006.375 - 13107.200: 98.3045% ( 1) 00:07:52.129 13107.200 - 13208.025: 98.3378% ( 5) 00:07:52.129 13208.025 - 13308.849: 98.5173% ( 27) 00:07:52.129 13308.849 - 13409.674: 98.6835% ( 25) 00:07:52.129 13409.674 - 13510.498: 98.7633% ( 12) 00:07:52.129 13510.498 - 13611.323: 98.8298% ( 10) 00:07:52.129 13611.323 - 13712.148: 98.8697% ( 6) 00:07:52.129 13712.148 - 13812.972: 98.9162% ( 7) 00:07:52.129 13812.972 - 13913.797: 98.9628% ( 7) 00:07:52.129 13913.797 - 14014.622: 99.0027% ( 6) 00:07:52.129 14014.622 - 14115.446: 99.0492% ( 7) 00:07:52.129 14115.446 - 14216.271: 99.0957% ( 7) 00:07:52.129 14216.271 - 14317.095: 99.1356% ( 6) 00:07:52.129 14317.095 - 14417.920: 99.1489% ( 2) 00:07:52.129 21878.942 - 21979.766: 99.1622% ( 2) 00:07:52.129 21979.766 - 22080.591: 99.1888% ( 4) 00:07:52.129 22080.591 - 22181.415: 99.2154% ( 4) 00:07:52.129 22181.415 - 22282.240: 99.2354% ( 3) 00:07:52.129 22282.240 - 22383.065: 99.2686% ( 5) 00:07:52.129 22383.065 - 22483.889: 99.2886% ( 3) 00:07:52.129 22483.889 - 22584.714: 99.3218% ( 5) 00:07:52.129 22584.714 - 22685.538: 99.3484% ( 4) 00:07:52.129 22685.538 - 22786.363: 99.3750% ( 4) 00:07:52.129 22786.363 - 22887.188: 99.4016% ( 4) 00:07:52.129 22887.188 - 22988.012: 99.4282% ( 4) 00:07:52.129 22988.012 - 23088.837: 99.4415% ( 2) 00:07:52.129 23088.837 - 23189.662: 99.4548% ( 2) 00:07:52.129 23189.662 - 23290.486: 99.4681% ( 2) 00:07:52.129 23290.486 - 23391.311: 99.4814% ( 2) 00:07:52.129 23391.311 - 23492.135: 99.4947% ( 2) 00:07:52.129 23492.135 - 23592.960: 99.5080% ( 2) 00:07:52.129 23592.960 - 23693.785: 99.5279% ( 3) 00:07:52.129 23693.785 - 23794.609: 99.5412% ( 2) 00:07:52.129 23794.609 - 23895.434: 99.5479% ( 1) 00:07:52.129 23895.434 - 23996.258: 99.5612% ( 2) 00:07:52.129 23996.258 - 24097.083: 99.5745% ( 2) 00:07:52.129 28029.243 - 28230.892: 99.7540% ( 27) 00:07:52.129 29239.138 - 29440.788: 99.8005% ( 7) 00:07:52.129 29440.788 - 29642.437: 99.8471% ( 7) 00:07:52.129 29642.437 - 29844.086: 99.8936% ( 7) 00:07:52.129 29844.086 - 30045.735: 99.9468% ( 8) 00:07:52.129 30045.735 - 30247.385: 99.9934% ( 7) 00:07:52.129 30247.385 - 30449.034: 100.0000% ( 1) 00:07:52.129 00:07:52.129 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:52.129 ============================================================================== 00:07:52.129 Range in us Cumulative IO count 00:07:52.129 6276.332 - 6301.538: 0.0066% ( 1) 00:07:52.129 6377.157 - 6402.363: 0.0266% ( 3) 00:07:52.129 6452.775 - 6503.188: 0.0997% ( 11) 00:07:52.129 6503.188 - 6553.600: 0.2460% ( 22) 00:07:52.129 6553.600 - 6604.012: 0.5120% ( 40) 00:07:52.129 6604.012 - 6654.425: 1.1104% ( 90) 
00:07:52.129 6654.425 - 6704.837: 2.0080% ( 135) 00:07:52.129 6704.837 - 6755.249: 3.3045% ( 195) 00:07:52.129 6755.249 - 6805.662: 5.4122% ( 317) 00:07:52.129 6805.662 - 6856.074: 8.0386% ( 395) 00:07:52.129 6856.074 - 6906.486: 10.7513% ( 408) 00:07:52.129 6906.486 - 6956.898: 12.7194% ( 296) 00:07:52.129 6956.898 - 7007.311: 15.6848% ( 446) 00:07:52.129 7007.311 - 7057.723: 17.7194% ( 306) 00:07:52.129 7057.723 - 7108.135: 19.6609% ( 292) 00:07:52.129 7108.135 - 7158.548: 21.4894% ( 275) 00:07:52.129 7158.548 - 7208.960: 22.7859% ( 195) 00:07:52.129 7208.960 - 7259.372: 24.4681% ( 253) 00:07:52.129 7259.372 - 7309.785: 25.4787% ( 152) 00:07:52.129 7309.785 - 7360.197: 26.7354% ( 189) 00:07:52.129 7360.197 - 7410.609: 27.5798% ( 127) 00:07:52.129 7410.609 - 7461.022: 28.2779% ( 105) 00:07:52.129 7461.022 - 7511.434: 29.0426% ( 115) 00:07:52.129 7511.434 - 7561.846: 29.8338% ( 119) 00:07:52.129 7561.846 - 7612.258: 30.4987% ( 100) 00:07:52.129 7612.258 - 7662.671: 31.2832% ( 118) 00:07:52.129 7662.671 - 7713.083: 32.1277% ( 127) 00:07:52.129 7713.083 - 7763.495: 32.9588% ( 125) 00:07:52.129 7763.495 - 7813.908: 34.0758% ( 168) 00:07:52.129 7813.908 - 7864.320: 35.3590% ( 193) 00:07:52.129 7864.320 - 7914.732: 36.3098% ( 143) 00:07:52.129 7914.732 - 7965.145: 37.4202% ( 167) 00:07:52.129 7965.145 - 8015.557: 38.9229% ( 226) 00:07:52.129 8015.557 - 8065.969: 41.7487% ( 425) 00:07:52.129 8065.969 - 8116.382: 44.1223% ( 357) 00:07:52.129 8116.382 - 8166.794: 46.6955% ( 387) 00:07:52.129 8166.794 - 8217.206: 50.1396% ( 518) 00:07:52.129 8217.206 - 8267.618: 54.6875% ( 684) 00:07:52.129 8267.618 - 8318.031: 58.6636% ( 598) 00:07:52.129 8318.031 - 8368.443: 61.3165% ( 399) 00:07:52.129 8368.443 - 8418.855: 63.6237% ( 347) 00:07:52.129 8418.855 - 8469.268: 65.0731% ( 218) 00:07:52.129 8469.268 - 8519.680: 66.2899% ( 183) 00:07:52.129 8519.680 - 8570.092: 67.4136% ( 169) 00:07:52.129 8570.092 - 8620.505: 68.3644% ( 143) 00:07:52.129 8620.505 - 8670.917: 69.0226% ( 99) 00:07:52.129 8670.917 - 8721.329: 69.6941% ( 101) 00:07:52.129 8721.329 - 8771.742: 70.1928% ( 75) 00:07:52.129 8771.742 - 8822.154: 71.0173% ( 124) 00:07:52.129 8822.154 - 8872.566: 71.5093% ( 74) 00:07:52.129 8872.566 - 8922.978: 71.9747% ( 70) 00:07:52.129 8922.978 - 8973.391: 72.4069% ( 65) 00:07:52.129 8973.391 - 9023.803: 72.8059% ( 60) 00:07:52.129 9023.803 - 9074.215: 73.1316% ( 49) 00:07:52.129 9074.215 - 9124.628: 73.4707% ( 51) 00:07:52.129 9124.628 - 9175.040: 73.8098% ( 51) 00:07:52.129 9175.040 - 9225.452: 74.3351% ( 79) 00:07:52.129 9225.452 - 9275.865: 74.7008% ( 55) 00:07:52.129 9275.865 - 9326.277: 75.5984% ( 135) 00:07:52.129 9326.277 - 9376.689: 76.5027% ( 136) 00:07:52.129 9376.689 - 9427.102: 77.4668% ( 145) 00:07:52.129 9427.102 - 9477.514: 78.8098% ( 202) 00:07:52.129 9477.514 - 9527.926: 80.1064% ( 195) 00:07:52.129 9527.926 - 9578.338: 81.5824% ( 222) 00:07:52.129 9578.338 - 9628.751: 83.0253% ( 217) 00:07:52.129 9628.751 - 9679.163: 84.3949% ( 206) 00:07:52.129 9679.163 - 9729.575: 85.8311% ( 216) 00:07:52.129 9729.575 - 9779.988: 87.0479% ( 183) 00:07:52.129 9779.988 - 9830.400: 88.4309% ( 208) 00:07:52.129 9830.400 - 9880.812: 89.5944% ( 175) 00:07:52.129 9880.812 - 9931.225: 90.4854% ( 134) 00:07:52.129 9931.225 - 9981.637: 91.0838% ( 90) 00:07:52.129 9981.637 - 10032.049: 91.7553% ( 101) 00:07:52.129 10032.049 - 10082.462: 92.5399% ( 118) 00:07:52.129 10082.462 - 10132.874: 92.9854% ( 67) 00:07:52.129 10132.874 - 10183.286: 93.3910% ( 61) 00:07:52.129 10183.286 - 10233.698: 93.7101% ( 48) 00:07:52.129 
10233.698 - 10284.111: 93.9761% ( 40) 00:07:52.129 10284.111 - 10334.523: 94.1822% ( 31) 00:07:52.129 10334.523 - 10384.935: 94.3816% ( 30) 00:07:52.129 10384.935 - 10435.348: 94.6809% ( 45) 00:07:52.129 10435.348 - 10485.760: 94.9734% ( 44) 00:07:52.129 10485.760 - 10536.172: 95.2261% ( 38) 00:07:52.129 10536.172 - 10586.585: 95.3989% ( 26) 00:07:52.129 10586.585 - 10636.997: 95.5253% ( 19) 00:07:52.129 10636.997 - 10687.409: 95.7114% ( 28) 00:07:52.129 10687.409 - 10737.822: 95.8444% ( 20) 00:07:52.130 10737.822 - 10788.234: 95.9043% ( 9) 00:07:52.130 10788.234 - 10838.646: 95.9707% ( 10) 00:07:52.130 10838.646 - 10889.058: 96.0173% ( 7) 00:07:52.130 10889.058 - 10939.471: 96.0505% ( 5) 00:07:52.130 10939.471 - 10989.883: 96.1636% ( 17) 00:07:52.130 10989.883 - 11040.295: 96.2699% ( 16) 00:07:52.130 11040.295 - 11090.708: 96.3697% ( 15) 00:07:52.130 11090.708 - 11141.120: 96.5426% ( 26) 00:07:52.130 11141.120 - 11191.532: 96.7620% ( 33) 00:07:52.130 11191.532 - 11241.945: 96.8883% ( 19) 00:07:52.130 11241.945 - 11292.357: 96.9814% ( 14) 00:07:52.130 11292.357 - 11342.769: 97.0811% ( 15) 00:07:52.130 11342.769 - 11393.182: 97.2074% ( 19) 00:07:52.130 11393.182 - 11443.594: 97.3936% ( 28) 00:07:52.130 11443.594 - 11494.006: 97.4801% ( 13) 00:07:52.130 11494.006 - 11544.418: 97.5399% ( 9) 00:07:52.130 11544.418 - 11594.831: 97.5997% ( 9) 00:07:52.130 11594.831 - 11645.243: 97.6529% ( 8) 00:07:52.130 11645.243 - 11695.655: 97.6995% ( 7) 00:07:52.130 11695.655 - 11746.068: 97.7327% ( 5) 00:07:52.130 11746.068 - 11796.480: 97.7660% ( 5) 00:07:52.130 11796.480 - 11846.892: 97.8125% ( 7) 00:07:52.130 11846.892 - 11897.305: 97.8657% ( 8) 00:07:52.130 11897.305 - 11947.717: 97.9122% ( 7) 00:07:52.130 11947.717 - 11998.129: 97.9388% ( 4) 00:07:52.130 11998.129 - 12048.542: 97.9588% ( 3) 00:07:52.130 12048.542 - 12098.954: 97.9787% ( 3) 00:07:52.130 12098.954 - 12149.366: 97.9987% ( 3) 00:07:52.130 12149.366 - 12199.778: 98.0053% ( 1) 00:07:52.130 12199.778 - 12250.191: 98.0253% ( 3) 00:07:52.130 12250.191 - 12300.603: 98.0585% ( 5) 00:07:52.130 12300.603 - 12351.015: 98.1051% ( 7) 00:07:52.130 12351.015 - 12401.428: 98.1316% ( 4) 00:07:52.130 12401.428 - 12451.840: 98.1782% ( 7) 00:07:52.130 12451.840 - 12502.252: 98.2447% ( 10) 00:07:52.130 12502.252 - 12552.665: 98.2713% ( 4) 00:07:52.130 12552.665 - 12603.077: 98.2846% ( 2) 00:07:52.130 12603.077 - 12653.489: 98.2912% ( 1) 00:07:52.130 12855.138 - 12905.551: 98.2979% ( 1) 00:07:52.130 13107.200 - 13208.025: 98.3311% ( 5) 00:07:52.130 13208.025 - 13308.849: 98.3777% ( 7) 00:07:52.130 13308.849 - 13409.674: 98.4508% ( 11) 00:07:52.130 13409.674 - 13510.498: 98.5439% ( 14) 00:07:52.130 13510.498 - 13611.323: 98.6436% ( 15) 00:07:52.130 13611.323 - 13712.148: 98.7301% ( 13) 00:07:52.130 13712.148 - 13812.972: 98.8564% ( 19) 00:07:52.130 13812.972 - 13913.797: 98.9295% ( 11) 00:07:52.130 13913.797 - 14014.622: 99.0027% ( 11) 00:07:52.130 14014.622 - 14115.446: 99.0824% ( 12) 00:07:52.130 14115.446 - 14216.271: 99.1290% ( 7) 00:07:52.130 14216.271 - 14317.095: 99.1489% ( 3) 00:07:52.130 20164.923 - 20265.748: 99.1556% ( 1) 00:07:52.130 20265.748 - 20366.572: 99.1755% ( 3) 00:07:52.130 20366.572 - 20467.397: 99.1888% ( 2) 00:07:52.130 20467.397 - 20568.222: 99.2088% ( 3) 00:07:52.130 20568.222 - 20669.046: 99.2287% ( 3) 00:07:52.130 20669.046 - 20769.871: 99.2420% ( 2) 00:07:52.130 20769.871 - 20870.695: 99.2620% ( 3) 00:07:52.130 20870.695 - 20971.520: 99.2753% ( 2) 00:07:52.130 20971.520 - 21072.345: 99.2952% ( 3) 00:07:52.130 21072.345 - 21173.169: 
99.3152% ( 3) 00:07:52.130 21173.169 - 21273.994: 99.3351% ( 3) 00:07:52.130 21273.994 - 21374.818: 99.3484% ( 2) 00:07:52.130 21374.818 - 21475.643: 99.3617% ( 2) 00:07:52.130 21475.643 - 21576.468: 99.3816% ( 3) 00:07:52.130 21576.468 - 21677.292: 99.3949% ( 2) 00:07:52.130 21677.292 - 21778.117: 99.4082% ( 2) 00:07:52.130 21778.117 - 21878.942: 99.4282% ( 3) 00:07:52.130 21878.942 - 21979.766: 99.4481% ( 3) 00:07:52.130 21979.766 - 22080.591: 99.4681% ( 3) 00:07:52.130 22080.591 - 22181.415: 99.4814% ( 2) 00:07:52.130 22181.415 - 22282.240: 99.4947% ( 2) 00:07:52.130 22282.240 - 22383.065: 99.5146% ( 3) 00:07:52.130 22383.065 - 22483.889: 99.5279% ( 2) 00:07:52.130 22483.889 - 22584.714: 99.5479% ( 3) 00:07:52.130 22584.714 - 22685.538: 99.5678% ( 3) 00:07:52.130 22685.538 - 22786.363: 99.5745% ( 1) 00:07:52.130 26819.348 - 27020.997: 99.5878% ( 2) 00:07:52.130 27020.997 - 27222.646: 99.6476% ( 9) 00:07:52.130 27222.646 - 27424.295: 99.6941% ( 7) 00:07:52.130 27424.295 - 27625.945: 99.7473% ( 8) 00:07:52.130 27625.945 - 27827.594: 99.8005% ( 8) 00:07:52.130 27827.594 - 28029.243: 99.8604% ( 9) 00:07:52.130 28029.243 - 28230.892: 99.9136% ( 8) 00:07:52.130 28230.892 - 28432.542: 99.9668% ( 8) 00:07:52.130 28432.542 - 28634.191: 100.0000% ( 5) 00:07:52.130 00:07:52.130 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:52.130 ============================================================================== 00:07:52.130 Range in us Cumulative IO count 00:07:52.130 6175.508 - 6200.714: 0.0066% ( 1) 00:07:52.130 6225.920 - 6251.126: 0.0133% ( 1) 00:07:52.130 6326.745 - 6351.951: 0.0199% ( 1) 00:07:52.130 6351.951 - 6377.157: 0.0266% ( 1) 00:07:52.130 6377.157 - 6402.363: 0.0399% ( 2) 00:07:52.130 6402.363 - 6427.569: 0.0598% ( 3) 00:07:52.130 6427.569 - 6452.775: 0.0931% ( 5) 00:07:52.130 6452.775 - 6503.188: 0.2061% ( 17) 00:07:52.130 6503.188 - 6553.600: 0.3657% ( 24) 00:07:52.130 6553.600 - 6604.012: 0.9176% ( 83) 00:07:52.130 6604.012 - 6654.425: 1.7620% ( 127) 00:07:52.130 6654.425 - 6704.837: 2.7726% ( 152) 00:07:52.130 6704.837 - 6755.249: 4.2420% ( 221) 00:07:52.130 6755.249 - 6805.662: 7.3005% ( 460) 00:07:52.130 6805.662 - 6856.074: 9.3484% ( 308) 00:07:52.130 6856.074 - 6906.486: 11.6090% ( 340) 00:07:52.130 6906.486 - 6956.898: 13.8032% ( 330) 00:07:52.130 6956.898 - 7007.311: 15.5186% ( 258) 00:07:52.130 7007.311 - 7057.723: 17.1011% ( 238) 00:07:52.130 7057.723 - 7108.135: 19.1755% ( 312) 00:07:52.130 7108.135 - 7158.548: 21.3032% ( 320) 00:07:52.130 7158.548 - 7208.960: 22.5532% ( 188) 00:07:52.130 7208.960 - 7259.372: 23.7633% ( 182) 00:07:52.130 7259.372 - 7309.785: 24.8471% ( 163) 00:07:52.130 7309.785 - 7360.197: 25.5186% ( 101) 00:07:52.130 7360.197 - 7410.609: 26.5758% ( 159) 00:07:52.130 7410.609 - 7461.022: 27.4202% ( 127) 00:07:52.130 7461.022 - 7511.434: 28.3178% ( 135) 00:07:52.130 7511.434 - 7561.846: 29.3949% ( 162) 00:07:52.130 7561.846 - 7612.258: 30.1197% ( 109) 00:07:52.130 7612.258 - 7662.671: 31.2633% ( 172) 00:07:52.130 7662.671 - 7713.083: 32.0013% ( 111) 00:07:52.130 7713.083 - 7763.495: 32.9122% ( 137) 00:07:52.130 7763.495 - 7813.908: 34.1755% ( 190) 00:07:52.130 7813.908 - 7864.320: 35.2527% ( 162) 00:07:52.130 7864.320 - 7914.732: 36.2699% ( 153) 00:07:52.130 7914.732 - 7965.145: 37.5465% ( 192) 00:07:52.130 7965.145 - 8015.557: 39.4548% ( 287) 00:07:52.130 8015.557 - 8065.969: 41.9681% ( 378) 00:07:52.130 8065.969 - 8116.382: 44.8537% ( 434) 00:07:52.130 8116.382 - 8166.794: 47.3803% ( 380) 00:07:52.130 8166.794 - 8217.206: 50.8910% ( 
528) 00:07:52.130 8217.206 - 8267.618: 56.0173% ( 771) 00:07:52.130 8267.618 - 8318.031: 59.3684% ( 504) 00:07:52.130 8318.031 - 8368.443: 62.1011% ( 411) 00:07:52.130 8368.443 - 8418.855: 63.8763% ( 267) 00:07:52.130 8418.855 - 8469.268: 65.5452% ( 251) 00:07:52.130 8469.268 - 8519.680: 66.8551% ( 197) 00:07:52.130 8519.680 - 8570.092: 67.8391% ( 148) 00:07:52.130 8570.092 - 8620.505: 68.6702% ( 125) 00:07:52.130 8620.505 - 8670.917: 69.2819% ( 92) 00:07:52.130 8670.917 - 8721.329: 69.6809% ( 60) 00:07:52.130 8721.329 - 8771.742: 69.9734% ( 44) 00:07:52.130 8771.742 - 8822.154: 70.3391% ( 55) 00:07:52.130 8822.154 - 8872.566: 70.9043% ( 85) 00:07:52.130 8872.566 - 8922.978: 71.5758% ( 101) 00:07:52.130 8922.978 - 8973.391: 72.0146% ( 66) 00:07:52.130 8973.391 - 9023.803: 72.2806% ( 40) 00:07:52.130 9023.803 - 9074.215: 72.6130% ( 50) 00:07:52.130 9074.215 - 9124.628: 73.1184% ( 76) 00:07:52.130 9124.628 - 9175.040: 73.5705% ( 68) 00:07:52.130 9175.040 - 9225.452: 74.0957% ( 79) 00:07:52.130 9225.452 - 9275.865: 74.6875% ( 89) 00:07:52.130 9275.865 - 9326.277: 75.5652% ( 132) 00:07:52.130 9326.277 - 9376.689: 76.2168% ( 98) 00:07:52.130 9376.689 - 9427.102: 77.1941% ( 147) 00:07:52.130 9427.102 - 9477.514: 78.3045% ( 167) 00:07:52.130 9477.514 - 9527.926: 79.8404% ( 231) 00:07:52.130 9527.926 - 9578.338: 81.4229% ( 238) 00:07:52.130 9578.338 - 9628.751: 82.9322% ( 227) 00:07:52.130 9628.751 - 9679.163: 84.5146% ( 238) 00:07:52.130 9679.163 - 9729.575: 86.2101% ( 255) 00:07:52.130 9729.575 - 9779.988: 87.6064% ( 210) 00:07:52.130 9779.988 - 9830.400: 88.7234% ( 168) 00:07:52.130 9830.400 - 9880.812: 90.0199% ( 195) 00:07:52.130 9880.812 - 9931.225: 91.0572% ( 156) 00:07:52.130 9931.225 - 9981.637: 91.7553% ( 105) 00:07:52.130 9981.637 - 10032.049: 92.2739% ( 78) 00:07:52.130 10032.049 - 10082.462: 92.7527% ( 72) 00:07:52.130 10082.462 - 10132.874: 93.1051% ( 53) 00:07:52.130 10132.874 - 10183.286: 93.4840% ( 57) 00:07:52.130 10183.286 - 10233.698: 93.8298% ( 52) 00:07:52.130 10233.698 - 10284.111: 93.9960% ( 25) 00:07:52.130 10284.111 - 10334.523: 94.2088% ( 32) 00:07:52.130 10334.523 - 10384.935: 94.3883% ( 27) 00:07:52.130 10384.935 - 10435.348: 94.6410% ( 38) 00:07:52.130 10435.348 - 10485.760: 94.8271% ( 28) 00:07:52.130 10485.760 - 10536.172: 95.0066% ( 27) 00:07:52.130 10536.172 - 10586.585: 95.1862% ( 27) 00:07:52.130 10586.585 - 10636.997: 95.3457% ( 24) 00:07:52.130 10636.997 - 10687.409: 95.4987% ( 23) 00:07:52.130 10687.409 - 10737.822: 95.7646% ( 40) 00:07:52.130 10737.822 - 10788.234: 95.9707% ( 31) 00:07:52.130 10788.234 - 10838.646: 96.1902% ( 33) 00:07:52.130 10838.646 - 10889.058: 96.3298% ( 21) 00:07:52.130 10889.058 - 10939.471: 96.4894% ( 24) 00:07:52.130 10939.471 - 10989.883: 96.6622% ( 26) 00:07:52.130 10989.883 - 11040.295: 96.7487% ( 13) 00:07:52.130 11040.295 - 11090.708: 96.8484% ( 15) 00:07:52.130 11090.708 - 11141.120: 96.9149% ( 10) 00:07:52.130 11141.120 - 11191.532: 96.9947% ( 12) 00:07:52.130 11191.532 - 11241.945: 97.0545% ( 9) 00:07:52.130 11241.945 - 11292.357: 97.1011% ( 7) 00:07:52.130 11292.357 - 11342.769: 97.1609% ( 9) 00:07:52.130 11342.769 - 11393.182: 97.2074% ( 7) 00:07:52.131 11393.182 - 11443.594: 97.2606% ( 8) 00:07:52.131 11443.594 - 11494.006: 97.2872% ( 4) 00:07:52.131 11494.006 - 11544.418: 97.3338% ( 7) 00:07:52.131 11544.418 - 11594.831: 97.3537% ( 3) 00:07:52.131 11594.831 - 11645.243: 97.3670% ( 2) 00:07:52.131 11645.243 - 11695.655: 97.3803% ( 2) 00:07:52.131 11695.655 - 11746.068: 97.3936% ( 2) 00:07:52.131 11746.068 - 11796.480: 
97.4069% ( 2) 00:07:52.131 11796.480 - 11846.892: 97.4535% ( 7) 00:07:52.131 11846.892 - 11897.305: 97.5066% ( 8) 00:07:52.131 11897.305 - 11947.717: 97.7128% ( 31) 00:07:52.131 11947.717 - 11998.129: 97.7726% ( 9) 00:07:52.131 11998.129 - 12048.542: 97.7926% ( 3) 00:07:52.131 12048.542 - 12098.954: 97.8723% ( 12) 00:07:52.131 12098.954 - 12149.366: 97.9455% ( 11) 00:07:52.131 12149.366 - 12199.778: 98.0253% ( 12) 00:07:52.131 12199.778 - 12250.191: 98.1250% ( 15) 00:07:52.131 12250.191 - 12300.603: 98.1582% ( 5) 00:07:52.131 12300.603 - 12351.015: 98.1649% ( 1) 00:07:52.131 12351.015 - 12401.428: 98.1782% ( 2) 00:07:52.131 12401.428 - 12451.840: 98.1915% ( 2) 00:07:52.131 12451.840 - 12502.252: 98.2048% ( 2) 00:07:52.131 12502.252 - 12552.665: 98.2181% ( 2) 00:07:52.131 12552.665 - 12603.077: 98.2247% ( 1) 00:07:52.131 12603.077 - 12653.489: 98.2380% ( 2) 00:07:52.131 12653.489 - 12703.902: 98.2513% ( 2) 00:07:52.131 12703.902 - 12754.314: 98.2646% ( 2) 00:07:52.131 12754.314 - 12804.726: 98.2779% ( 2) 00:07:52.131 12804.726 - 12855.138: 98.2912% ( 2) 00:07:52.131 12855.138 - 12905.551: 98.3112% ( 3) 00:07:52.131 12905.551 - 13006.375: 98.3577% ( 7) 00:07:52.131 13006.375 - 13107.200: 98.4043% ( 7) 00:07:52.131 13107.200 - 13208.025: 98.4441% ( 6) 00:07:52.131 13208.025 - 13308.849: 98.4907% ( 7) 00:07:52.131 13308.849 - 13409.674: 98.5306% ( 6) 00:07:52.131 13409.674 - 13510.498: 98.5771% ( 7) 00:07:52.131 13510.498 - 13611.323: 98.6170% ( 6) 00:07:52.131 13611.323 - 13712.148: 98.6503% ( 5) 00:07:52.131 13712.148 - 13812.972: 98.7766% ( 19) 00:07:52.131 13812.972 - 13913.797: 98.8165% ( 6) 00:07:52.131 13913.797 - 14014.622: 98.8630% ( 7) 00:07:52.131 14014.622 - 14115.446: 98.9029% ( 6) 00:07:52.131 14115.446 - 14216.271: 98.9428% ( 6) 00:07:52.131 14216.271 - 14317.095: 98.9894% ( 7) 00:07:52.131 14317.095 - 14417.920: 99.0293% ( 6) 00:07:52.131 14417.920 - 14518.745: 99.0625% ( 5) 00:07:52.131 14518.745 - 14619.569: 99.1024% ( 6) 00:07:52.131 14619.569 - 14720.394: 99.1423% ( 6) 00:07:52.131 14720.394 - 14821.218: 99.1489% ( 1) 00:07:52.131 20467.397 - 20568.222: 99.1556% ( 1) 00:07:52.131 20769.871 - 20870.695: 99.1755% ( 3) 00:07:52.131 20870.695 - 20971.520: 99.1888% ( 2) 00:07:52.131 20971.520 - 21072.345: 99.2021% ( 2) 00:07:52.131 21072.345 - 21173.169: 99.2287% ( 4) 00:07:52.131 21173.169 - 21273.994: 99.2553% ( 4) 00:07:52.131 21273.994 - 21374.818: 99.2753% ( 3) 00:07:52.131 21374.818 - 21475.643: 99.2952% ( 3) 00:07:52.131 21475.643 - 21576.468: 99.3218% ( 4) 00:07:52.131 21576.468 - 21677.292: 99.3484% ( 4) 00:07:52.131 21677.292 - 21778.117: 99.3816% ( 5) 00:07:52.131 21778.117 - 21878.942: 99.4149% ( 5) 00:07:52.131 21878.942 - 21979.766: 99.4282% ( 2) 00:07:52.131 21979.766 - 22080.591: 99.4415% ( 2) 00:07:52.131 22080.591 - 22181.415: 99.4614% ( 3) 00:07:52.131 22181.415 - 22282.240: 99.4747% ( 2) 00:07:52.131 22282.240 - 22383.065: 99.4880% ( 2) 00:07:52.131 22383.065 - 22483.889: 99.5080% ( 3) 00:07:52.131 22483.889 - 22584.714: 99.5213% ( 2) 00:07:52.131 22584.714 - 22685.538: 99.5412% ( 3) 00:07:52.131 22685.538 - 22786.363: 99.5612% ( 3) 00:07:52.131 22786.363 - 22887.188: 99.5745% ( 2) 00:07:52.131 24298.732 - 24399.557: 99.5878% ( 2) 00:07:52.131 24399.557 - 24500.382: 99.6410% ( 8) 00:07:52.131 24500.382 - 24601.206: 99.6941% ( 8) 00:07:52.131 24601.206 - 24702.031: 99.7274% ( 5) 00:07:52.131 24702.031 - 24802.855: 99.7739% ( 7) 00:07:52.131 24802.855 - 24903.680: 99.8205% ( 7) 00:07:52.131 24903.680 - 25004.505: 99.8936% ( 11) 00:07:52.131 25004.505 - 
00:07:52.388 14:00:53 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:52.389 
00:07:52.389 real 0m2.510s
00:07:52.389 user 0m2.208s
00:07:52.389 sys 0m0.199s
00:07:52.389 14:00:53 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.389 14:00:53 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:52.389 ************************************
00:07:52.389 END TEST nvme_perf
00:07:52.389 ************************************
00:07:52.389 14:00:53 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:52.389 14:00:53 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:52.389 14:00:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.389 14:00:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.389 ************************************
00:07:52.389 START TEST nvme_hello_world
00:07:52.389 ************************************
00:07:52.389 14:00:53 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:52.389 Initializing NVMe Controllers
00:07:52.389 Attached to 0000:00:10.0
00:07:52.389 Namespace ID: 1 size: 6GB
00:07:52.389 Attached to 0000:00:11.0
00:07:52.389 Namespace ID: 1 size: 5GB
00:07:52.389 Attached to 0000:00:13.0
00:07:52.389 Namespace ID: 1 size: 1GB
00:07:52.389 Attached to 0000:00:12.0
00:07:52.389 Namespace ID: 1 size: 4GB
00:07:52.389 Namespace ID: 2 size: 4GB
00:07:52.389 Namespace ID: 3 size: 4GB
00:07:52.389 Initialization complete.
00:07:52.389 INFO: using host memory buffer for IO
00:07:52.389 Hello world!
00:07:52.389 INFO: using host memory buffer for IO
00:07:52.389 Hello world!
00:07:52.389 INFO: using host memory buffer for IO
00:07:52.389 Hello world!
00:07:52.389 INFO: using host memory buffer for IO
00:07:52.389 Hello world!
00:07:52.389 INFO: using host memory buffer for IO
00:07:52.389 Hello world!
00:07:52.389 INFO: using host memory buffer for IO
00:07:52.389 Hello world!
00:07:52.646 
00:07:52.646 real 0m0.234s
00:07:52.646 user 0m0.079s
00:07:52.646 sys 0m0.105s
00:07:52.646 14:00:54 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.646 14:00:54 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:52.646 ************************************
00:07:52.646 END TEST nvme_hello_world
00:07:52.646 ************************************
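A note on what nvme_hello_world just exercised: the "Attached to ... / Namespace ID: ... size: ..." lines come from SPDK's probe/attach flow. Below is a minimal, hypothetical C sketch of that flow against the public API (spdk_nvme_probe and the namespace accessors are real SPDK calls; the program itself and the app name hello_sketch are invented for illustration, not the shipped examples/hello_world source):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            return true; /* claim every controller found on the local PCIe bus */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            uint32_t nsid;

            printf("Attached to %s\n", trid->traddr);
            /* Walk the active namespaces and print their sizes, as in the log. */
            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                    printf("  Namespace ID: %u size: %lluGB\n", nsid,
                           (unsigned long long)(spdk_nvme_ns_get_size(ns) / 1000000000ULL));
            }
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "hello_sketch"; /* invented app name; any string works */
            if (spdk_env_init(&opts) < 0) {
                    fprintf(stderr, "spdk_env_init failed\n");
                    return 1;
            }
            /* A NULL transport ID means: enumerate local PCIe NVMe controllers. */
            if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
                    fprintf(stderr, "spdk_nvme_probe failed\n");
                    return 1;
            }
            return 0;
    }

Claiming everything in probe_cb is what yields one "Attached to" line per controller above; spdk_nvme_detach cleanup is omitted for brevity.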
00:07:52.646 14:00:54 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:52.646 14:00:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:52.646 14:00:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.646 14:00:54 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.646 ************************************
00:07:52.646 START TEST nvme_sgl
00:07:52.646 ************************************
00:07:52.646 14:00:54 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:52.646 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:52.646 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:52.646 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:52.904 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:52.904 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:52.904 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:52.904 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:52.904 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:52.904 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:52.904 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:52.904 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:52.904 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:52.904 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:52.904 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:52.904 NVMe Readv/Writev Request test
00:07:52.904 Attached to 0000:00:10.0
00:07:52.904 Attached to 0000:00:11.0
00:07:52.904 Attached to 0000:00:13.0
00:07:52.904 Attached to 0000:00:12.0
00:07:52.904 0000:00:10.0: build_io_request_2 test passed
00:07:52.904 0000:00:10.0: build_io_request_4 test passed
00:07:52.904 0000:00:10.0: build_io_request_5 test passed
00:07:52.904 0000:00:10.0: build_io_request_6 test passed
00:07:52.904 0000:00:10.0: build_io_request_7 test passed
00:07:52.904 0000:00:10.0: build_io_request_10 test passed
00:07:52.904 0000:00:11.0: build_io_request_2 test passed
00:07:52.904 0000:00:11.0: build_io_request_4 test passed
00:07:52.904 0000:00:11.0: build_io_request_5 test passed
00:07:52.904 0000:00:11.0: build_io_request_6 test passed
00:07:52.904 0000:00:11.0: build_io_request_7 test passed
00:07:52.904 0000:00:11.0: build_io_request_10 test passed
00:07:52.904 Cleaning up...
00:07:52.904 
00:07:52.904 real 0m0.281s
00:07:52.904 user 0m0.146s
00:07:52.904 sys 0m0.091s
00:07:52.904 14:00:54 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.904 14:00:54 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:52.904 ************************************
00:07:52.904 END TEST nvme_sgl
00:07:52.904 ************************************
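Each build_io_request_N case above constructs a different scatter-gather layout and expects the driver to accept or reject it; "Invalid IO length parameter" is the expected rejection when the SGL's total byte count does not match the stated transfer length. A hedged sketch of the callback-driven SGL path follows — spdk_nvme_ns_cmd_readv, its reset_sgl/next_sge callback convention, and spdk_nvme_ns_get_sector_size are real SPDK API, while the sgl_ctx type and function names are invented:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    struct sgl_ctx {
            void    *segs[2];  /* two pinned buffers forming the SGL */
            uint32_t seg_len;  /* bytes per segment */
            uint32_t idx;      /* cursor advanced by next_sge() */
    };

    static void
    reset_sgl(void *arg, uint32_t offset)
    {
            struct sgl_ctx *ctx = arg;

            /* Sketch assumes offset lands on a segment boundary. */
            ctx->idx = offset / ctx->seg_len;
    }

    static int
    next_sge(void *arg, void **address, uint32_t *length)
    {
            struct sgl_ctx *ctx = arg;

            *address = ctx->segs[ctx->idx];
            *length = ctx->seg_len;
            ctx->idx++;
            return 0; /* 0 = success, per the callback convention */
    }

    /* Issue one scattered read; assumes ns and qpair were set up at attach. */
    static int
    submit_scattered_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                          struct sgl_ctx *ctx, spdk_nvme_cmd_cb done)
    {
            uint32_t lba_count = (2 * ctx->seg_len) /
                                 spdk_nvme_ns_get_sector_size(ns);

            /* An lba_count that disagrees with the SGL's total bytes is
             * exactly the "Invalid IO length parameter" case in the log. */
            return spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* lba */, lba_count,
                                          done, ctx, 0 /* io_flags */,
                                          reset_sgl, next_sge);
    }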
00:07:52.904 14:00:54 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:52.904 14:00:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:52.904 14:00:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.904 14:00:54 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:52.904 ************************************
00:07:52.904 START TEST nvme_e2edp
00:07:52.904 ************************************
00:07:52.904 14:00:54 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:53.162 NVMe Write/Read with End-to-End data protection test
00:07:53.162 Attached to 0000:00:10.0
00:07:53.162 Attached to 0000:00:11.0
00:07:53.162 Attached to 0000:00:13.0
00:07:53.162 Attached to 0000:00:12.0
00:07:53.162 Cleaning up...
00:07:53.162 
00:07:53.162 real 0m0.249s
00:07:53.162 user 0m0.078s
00:07:53.162 sys 0m0.123s
00:07:53.162 14:00:54 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:53.162 14:00:54 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:53.162 ************************************
00:07:53.162 END TEST nvme_e2edp
00:07:53.162 ************************************
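nvme_dp's write/read pass exercises end-to-end data protection: the controller verifies protection information (PI) carried alongside each block. A small sketch under stated assumptions — the PRCHK io_flags, spdk_nvme_ns_get_pi_type, and spdk_nvme_ns_cmd_write are real SPDK definitions, while write_with_pi_check is an invented helper:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Ask the controller to verify the guard and reference tags end to end. */
    static int
    write_with_pi_check(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                        void *buf, uint64_t lba, uint32_t lba_count,
                        spdk_nvme_cmd_cb done, void *done_arg)
    {
            uint32_t io_flags = 0;

            /* Only request PI checks on namespaces formatted with protection. */
            if (spdk_nvme_ns_get_pi_type(ns) !=
                SPDK_NVME_FMT_NVM_PROTECTION_DISABLE) {
                    io_flags |= SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                                SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;
            }
            return spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                          done, done_arg, io_flags);
    }

The matching read then sets the same flags, so a corrupted guard tag fails the IO instead of silently returning bad data.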
00:07:53.162 14:00:54 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:53.162 14:00:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:53.162 14:00:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:53.162 14:00:54 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:53.162 ************************************
00:07:53.162 START TEST nvme_reserve
00:07:53.162 ************************************
00:07:53.162 14:00:54 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:53.419 =====================================================
00:07:53.419 NVMe Controller at PCI bus 0, device 16, function 0
00:07:53.419 =====================================================
00:07:53.419 Reservations:                Not Supported
00:07:53.419 =====================================================
00:07:53.419 NVMe Controller at PCI bus 0, device 17, function 0
00:07:53.419 =====================================================
00:07:53.419 Reservations:                Not Supported
00:07:53.419 =====================================================
00:07:53.419 NVMe Controller at PCI bus 0, device 19, function 0
00:07:53.419 =====================================================
00:07:53.419 Reservations:                Not Supported
00:07:53.419 =====================================================
00:07:53.419 NVMe Controller at PCI bus 0, device 18, function 0
00:07:53.419 =====================================================
00:07:53.419 Reservations:                Not Supported
00:07:53.419 Reservation test passed
00:07:53.420 
00:07:53.420 real 0m0.206s
00:07:53.420 user 0m0.073s
00:07:53.420 sys 0m0.091s
00:07:53.420 14:00:55 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:53.420 14:00:55 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:53.420 ************************************
00:07:53.420 END TEST nvme_reserve
00:07:53.420 ************************************
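All four QEMU controllers report "Reservations: Not Supported", so the test passes trivially here by checking the capability bit. On a controller whose ONCS capability does advertise reservations, registering a host key would look roughly like the sketch below — spdk_nvme_ns_cmd_reservation_register and the spdk_nvme_reservation_register_data payload are real SPDK definitions, while register_host_key and the key value are invented:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Register this host's reservation key on the namespace. */
    static int
    register_host_key(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                      uint64_t new_key, spdk_nvme_cmd_cb done, void *done_arg)
    {
            struct spdk_nvme_reservation_register_data payload = {
                    .crkey = 0,       /* no current key held yet */
                    .nrkey = new_key, /* key this host wants to register */
            };

            return spdk_nvme_ns_cmd_reservation_register(ns, qpair, &payload,
                            true /* ignore_key */,
                            SPDK_NVME_RESERVE_REGISTER_KEY,
                            SPDK_NVME_RESERVE_PTPL_NO_CHANGES,
                            done, done_arg);
    }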
00:07:53.420 14:00:55 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:53.420 14:00:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:53.420 14:00:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:53.420 14:00:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:53.420 ************************************
00:07:53.420 START TEST nvme_err_injection
00:07:53.420 ************************************
00:07:53.420 14:00:55 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:53.677 NVMe Error Injection test
00:07:53.677 Attached to 0000:00:10.0
00:07:53.677 Attached to 0000:00:11.0
00:07:53.677 Attached to 0000:00:13.0
00:07:53.677 Attached to 0000:00:12.0
00:07:53.677 0000:00:12.0: get features failed as expected
00:07:53.677 0000:00:10.0: get features failed as expected
00:07:53.677 0000:00:11.0: get features failed as expected
00:07:53.677 0000:00:13.0: get features failed as expected
00:07:53.677 0000:00:10.0: get features successfully as expected
00:07:53.677 0000:00:11.0: get features successfully as expected
00:07:53.677 0000:00:13.0: get features successfully as expected
00:07:53.677 0000:00:12.0: get features successfully as expected
00:07:53.677 0000:00:10.0: read failed as expected
00:07:53.677 0000:00:11.0: read failed as expected
00:07:53.677 0000:00:13.0: read failed as expected
00:07:53.677 0000:00:12.0: read failed as expected
00:07:53.677 0000:00:10.0: read successfully as expected
00:07:53.677 0000:00:11.0: read successfully as expected
00:07:53.677 0000:00:13.0: read successfully as expected
00:07:53.677 0000:00:12.0: read successfully as expected
00:07:53.677 Cleaning up...
00:07:53.677 
00:07:53.677 real 0m0.221s
00:07:53.677 user 0m0.085s
00:07:53.677 sys 0m0.091s
00:07:53.677 14:00:55 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:53.677 14:00:55 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:53.677 ************************************
00:07:53.677 END TEST nvme_err_injection
00:07:53.677 ************************************
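The failed/succeeded pairs above come from injecting errors into the driver's command path and then clearing the injection: each command first completes with the injected status ("failed as expected"), then succeeds once the error budget is exhausted or removed. A minimal sketch, assuming an attached controller and an allocated IO qpair — spdk_nvme_qpair_add_cmd_error_injection and the opcode/status constants are real SPDK API, inject_read_errors is an invented wrapper:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Make the next two READs on this qpair complete with Invalid Field. */
    static int
    inject_read_errors(struct spdk_nvme_ctrlr *ctrlr,
                       struct spdk_nvme_qpair *qpair)
    {
            /* As I understand the API, passing qpair == NULL instead targets
             * admin commands, which is how "get features failed as expected"
             * above would be produced. */
            return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, qpair,
                            SPDK_NVME_OPC_READ, false /* do_not_submit */,
                            0 /* timeout_in_us */, 2 /* err_count */,
                            SPDK_NVME_SCT_GENERIC,
                            SPDK_NVME_SC_INVALID_FIELD);
    }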
00:07:53.677 14:00:55 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:53.677 14:00:55 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:53.677 14:00:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:53.677 14:00:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:53.677 ************************************
00:07:53.677 START TEST nvme_overhead
00:07:53.677 ************************************
00:07:53.677 14:00:55 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:55.103 Initializing NVMe Controllers
00:07:55.103 Attached to 0000:00:10.0
00:07:55.103 Attached to 0000:00:11.0
00:07:55.103 Attached to 0000:00:13.0
00:07:55.103 Attached to 0000:00:12.0
00:07:55.103 Initialization complete. Launching workers.
00:07:55.103 submit (in ns) avg, min, max = 11504.3, 9823.8, 315042.3
00:07:55.103 complete (in ns) avg, min, max = 7740.2, 7245.4, 96841.5
00:07:55.103 
00:07:55.103 Submit histogram
00:07:55.103 ================
00:07:55.103 Range in us Cumulative Count
00:07:55.103 [ per-bucket cumulative rows, 9.797 us through 315.077 us (100.0000%), omitted ]
00:07:55.104 
00:07:55.104 Complete histogram
00:07:55.104 ==================
00:07:55.104 Range in us Cumulative Count
00:07:55.104 [ per-bucket cumulative rows, 7.237 us through 96.886 us (100.0000%), omitted ]
00:07:55.104 
00:07:55.104 real 0m1.228s
00:07:55.104 user 0m1.077s
00:07:55.104 sys 0m0.093s
00:07:55.104 14:00:56 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:55.104 14:00:56 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:55.104 ************************************
00:07:55.104 END TEST nvme_overhead
00:07:55.104 ************************************
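On the summary numbers above: the tool reports roughly 11.5 us average per-IO submit cost and 7.7 us completion cost for 4 KiB IOs, measured separately with the CPU timestamp counter. A hedged sketch of the submit-side measurement — spdk_get_ticks, spdk_get_ticks_hz, and spdk_nvme_ns_cmd_read are real SPDK API; time_one_submit_ns is an invented helper, not the overhead tool's source:

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Time just the submission call; returns nanoseconds, 0 on submit error. */
    static uint64_t
    time_one_submit_ns(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                       void *buf, spdk_nvme_cmd_cb done, void *done_arg)
    {
            uint64_t start, end;
            int rc;

            start = spdk_get_ticks();
            rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */,
                                       1 /* lba_count */, done, done_arg,
                                       0 /* io_flags */);
            end = spdk_get_ticks();
            if (rc != 0) {
                    return 0;
            }
            /* Convert TSC ticks to nanoseconds (fine for short intervals). */
            return (end - start) * 1000000000ULL / spdk_get_ticks_hz();
    }

The complete-side cost is measured the same way around spdk_nvme_qpair_process_completions, which is why the two distributions are reported independently.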
00:07:58.385 Starting thread on core 1 with urgent priority queue
00:07:58.385 Starting thread on core 2 with urgent priority queue
00:07:58.385 Starting thread on core 3 with urgent priority queue
00:07:58.385 Starting thread on core 0 with urgent priority queue
00:07:58.385 QEMU NVMe Ctrl (12340 ) core 0: 896.00 IO/s 111.61 secs/100000 ios
00:07:58.385 QEMU NVMe Ctrl (12342 ) core 0: 896.00 IO/s 111.61 secs/100000 ios
00:07:58.385 QEMU NVMe Ctrl (12341 ) core 1: 896.00 IO/s 111.61 secs/100000 ios
00:07:58.385 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios
00:07:58.385 QEMU NVMe Ctrl (12343 ) core 2: 917.33 IO/s 109.01 secs/100000 ios
00:07:58.385 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios
00:07:58.385 ========================================================
00:07:58.385
00:07:58.385
00:07:58.385 real 0m3.308s
00:07:58.385 user 0m9.284s
00:07:58.385 sys 0m0.109s
00:07:58.385 14:00:59 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:58.385 ************************************
00:07:58.385 END TEST nvme_arbitration
00:07:58.385 ************************************
00:07:58.385 14:00:59 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:58.385 14:00:59 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:58.385 14:00:59 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:58.385 14:00:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:58.385 14:00:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:58.385 ************************************
00:07:58.385 START TEST nvme_single_aen
00:07:58.385 ************************************
00:07:58.385 14:00:59 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:58.644 Asynchronous Event Request test
00:07:58.644 Attached to 0000:00:10.0
00:07:58.644 Attached to 0000:00:11.0
00:07:58.644 Attached to 0000:00:13.0
00:07:58.644 Attached to 0000:00:12.0
00:07:58.644 Reset controller to setup AER completions for this process
00:07:58.644 Registering asynchronous event callbacks...
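The single-AEN pass that has just started can likewise be invoked on its own. A sketch; the -T reading is inferred from the "Setting all controllers temperature threshold low" output that follows, so treat it as an assumption:

    # Sketch: drive one temperature-threshold AER per controller.
    # -T   : temperature-threshold test mode (inferred from the output below)
    # -i 0 : shared-memory ID, as elsewhere in this job
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0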
00:07:58.644 Getting orig temperature thresholds of all controllers 00:07:58.644 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:58.644 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:58.644 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:58.644 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:58.644 Setting all controllers temperature threshold low to trigger AER 00:07:58.644 Waiting for all controllers temperature threshold to be set lower 00:07:58.644 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:58.644 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:58.644 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:58.644 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:58.644 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:58.644 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:58.644 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:58.644 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:58.644 Waiting for all controllers to trigger AER and reset threshold 00:07:58.644 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.644 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.644 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.644 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.644 Cleaning up... 00:07:58.644 00:07:58.644 real 0m0.235s 00:07:58.644 user 0m0.085s 00:07:58.644 sys 0m0.100s 00:07:58.644 14:01:00 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.644 14:01:00 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:58.644 ************************************ 00:07:58.644 END TEST nvme_single_aen 00:07:58.644 ************************************ 00:07:58.644 14:01:00 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:58.644 14:01:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.644 14:01:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.644 14:01:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.644 ************************************ 00:07:58.644 START TEST nvme_doorbell_aers 00:07:58.644 ************************************ 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
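Condensed, the get_nvme_bdfs trace above plus the per-controller loop that follows reduce to the sketch below; the pipeline and the timeout call are copied from the autotest_common.sh@1499 and nvme/nvme.sh@72-73 lines in this log, while the empty-list handling is an assumption:

    # Enumerate controller PCI addresses from the generated config:
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # assumed handling when none are found
    # Abuse the doorbells of each device in turn, capped at 10 s apiece:
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done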
00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:58.644 14:01:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:58.902 [2024-12-09 14:01:00.493401] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:08.873 Executing: test_write_invalid_db 00:08:08.873 Waiting for AER completion... 00:08:08.873 Failure: test_write_invalid_db 00:08:08.873 00:08:08.873 Executing: test_invalid_db_write_overflow_sq 00:08:08.873 Waiting for AER completion... 00:08:08.873 Failure: test_invalid_db_write_overflow_sq 00:08:08.873 00:08:08.873 Executing: test_invalid_db_write_overflow_cq 00:08:08.873 Waiting for AER completion... 00:08:08.873 Failure: test_invalid_db_write_overflow_cq 00:08:08.873 00:08:08.873 14:01:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:08.873 14:01:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:08.873 [2024-12-09 14:01:10.522001] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:18.833 Executing: test_write_invalid_db 00:08:18.833 Waiting for AER completion... 00:08:18.833 Failure: test_write_invalid_db 00:08:18.834 00:08:18.834 Executing: test_invalid_db_write_overflow_sq 00:08:18.834 Waiting for AER completion... 00:08:18.834 Failure: test_invalid_db_write_overflow_sq 00:08:18.834 00:08:18.834 Executing: test_invalid_db_write_overflow_cq 00:08:18.834 Waiting for AER completion... 00:08:18.834 Failure: test_invalid_db_write_overflow_cq 00:08:18.834 00:08:18.834 14:01:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:18.834 14:01:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:18.834 [2024-12-09 14:01:20.546886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:28.827 Executing: test_write_invalid_db 00:08:28.827 Waiting for AER completion... 00:08:28.827 Failure: test_write_invalid_db 00:08:28.827 00:08:28.827 Executing: test_invalid_db_write_overflow_sq 00:08:28.827 Waiting for AER completion... 00:08:28.827 Failure: test_invalid_db_write_overflow_sq 00:08:28.827 00:08:28.827 Executing: test_invalid_db_write_overflow_cq 00:08:28.827 Waiting for AER completion... 
00:08:28.827 Failure: test_invalid_db_write_overflow_cq 00:08:28.827 00:08:28.827 14:01:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:28.827 14:01:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:28.827 [2024-12-09 14:01:30.577228] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:38.792 Executing: test_write_invalid_db 00:08:38.792 Waiting for AER completion... 00:08:38.792 Failure: test_write_invalid_db 00:08:38.792 00:08:38.792 Executing: test_invalid_db_write_overflow_sq 00:08:38.792 Waiting for AER completion... 00:08:38.792 Failure: test_invalid_db_write_overflow_sq 00:08:38.792 00:08:38.792 Executing: test_invalid_db_write_overflow_cq 00:08:38.792 Waiting for AER completion... 00:08:38.792 Failure: test_invalid_db_write_overflow_cq 00:08:38.792 00:08:38.792 00:08:38.792 real 0m40.173s 00:08:38.792 user 0m34.043s 00:08:38.792 sys 0m5.779s 00:08:38.792 14:01:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.792 ************************************ 00:08:38.792 END TEST nvme_doorbell_aers 00:08:38.792 ************************************ 00:08:38.792 14:01:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:38.792 14:01:40 nvme -- nvme/nvme.sh@97 -- # uname 00:08:38.792 14:01:40 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:38.792 14:01:40 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:38.792 14:01:40 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:38.792 14:01:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.792 14:01:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.792 ************************************ 00:08:38.792 START TEST nvme_multi_aen 00:08:38.792 ************************************ 00:08:38.792 14:01:40 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:39.050 [2024-12-09 14:01:40.614816] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.614879] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.614889] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.616259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.616300] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.616310] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.617298] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. 
Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.617325] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.617332] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.618319] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.618345] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 [2024-12-09 14:01:40.618353] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63255) is not found. Dropping the request. 00:08:39.050 Child process pid: 63781 00:08:39.050 [Child] Asynchronous Event Request test 00:08:39.050 [Child] Attached to 0000:00:10.0 00:08:39.050 [Child] Attached to 0000:00:11.0 00:08:39.050 [Child] Attached to 0000:00:13.0 00:08:39.050 [Child] Attached to 0000:00:12.0 00:08:39.050 [Child] Registering asynchronous event callbacks... 00:08:39.050 [Child] Getting orig temperature thresholds of all controllers 00:08:39.050 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.050 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.050 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.050 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.050 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:39.050 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.050 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.050 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.050 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.050 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.050 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.050 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.050 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.050 [Child] Cleaning up... 00:08:39.308 Asynchronous Event Request test 00:08:39.308 Attached to 0000:00:10.0 00:08:39.308 Attached to 0000:00:11.0 00:08:39.308 Attached to 0000:00:13.0 00:08:39.308 Attached to 0000:00:12.0 00:08:39.308 Reset controller to setup AER completions for this process 00:08:39.308 Registering asynchronous event callbacks... 
00:08:39.308 Getting orig temperature thresholds of all controllers 00:08:39.308 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.308 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.308 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.308 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.308 Setting all controllers temperature threshold low to trigger AER 00:08:39.308 Waiting for all controllers temperature threshold to be set lower 00:08:39.308 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.308 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:39.308 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.308 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:39.308 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.308 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:39.308 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.308 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:39.308 Waiting for all controllers to trigger AER and reset threshold 00:08:39.308 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.308 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.308 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.308 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.308 Cleaning up... 00:08:39.308 00:08:39.308 real 0m0.418s 00:08:39.308 user 0m0.142s 00:08:39.308 sys 0m0.174s 00:08:39.308 14:01:40 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.308 14:01:40 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:39.308 ************************************ 00:08:39.308 END TEST nvme_multi_aen 00:08:39.308 ************************************ 00:08:39.308 14:01:40 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:39.308 14:01:40 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:39.308 14:01:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.308 14:01:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.308 ************************************ 00:08:39.308 START TEST nvme_startup 00:08:39.308 ************************************ 00:08:39.308 14:01:40 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:39.308 Initializing NVMe Controllers 00:08:39.308 Attached to 0000:00:10.0 00:08:39.308 Attached to 0000:00:11.0 00:08:39.308 Attached to 0000:00:13.0 00:08:39.308 Attached to 0000:00:12.0 00:08:39.308 Initialization complete. 00:08:39.308 Time used:146010.156 (us). 
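The nvme_startup pass above does nothing more than attach every controller once and report how long that took; the "Time used" figure is printed in microseconds. A sketch of the direct call, where reading -t 1000000 as a microsecond budget is an assumption:

    # Sketch: time NVMe controller bring-up across all attached devices.
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000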
00:08:39.565 00:08:39.565 real 0m0.206s 00:08:39.565 user 0m0.077s 00:08:39.565 sys 0m0.078s 00:08:39.565 14:01:41 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.565 14:01:41 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:39.565 ************************************ 00:08:39.565 END TEST nvme_startup 00:08:39.565 ************************************ 00:08:39.565 14:01:41 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:39.565 14:01:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.565 14:01:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.565 14:01:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.565 ************************************ 00:08:39.565 START TEST nvme_multi_secondary 00:08:39.565 ************************************ 00:08:39.565 14:01:41 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:39.565 14:01:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63831 00:08:39.565 14:01:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63832 00:08:39.565 14:01:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:39.565 14:01:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:39.565 14:01:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:42.844 Initializing NVMe Controllers 00:08:42.844 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:42.844 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:42.844 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:42.844 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:42.844 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:42.844 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:42.844 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:42.844 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:42.844 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:42.844 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:42.844 Initialization complete. Launching workers. 
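nvme_multi_secondary starts three spdk_nvme_perf processes against the same controllers, all joined through one shared-memory instance. The command below is copied from the traces above; the glosses are hedged readings based on common spdk_nvme_perf usage, not on anything the log states:

    # -i 0    shared-memory ID: all three processes join one SPDK instance
    # -q 16   queue depth
    # -w read read-only workload
    # -o 4096 I/O size in bytes
    # -t N    run time in seconds (5 for the long run, 3 for the short ones)
    # -c MASK core mask: 0x1, 0x2 and 0x4 keep the processes on separate cores
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2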
00:08:42.844 ======================================================== 00:08:42.844 Latency(us) 00:08:42.844 Device Information : IOPS MiB/s Average min max 00:08:42.844 PCIE (0000:00:10.0) NSID 1 from core 1: 7665.57 29.94 2085.85 1028.13 6707.20 00:08:42.844 PCIE (0000:00:11.0) NSID 1 from core 1: 7665.57 29.94 2086.84 1050.05 6461.17 00:08:42.844 PCIE (0000:00:13.0) NSID 1 from core 1: 7665.57 29.94 2086.86 997.61 6523.72 00:08:42.844 PCIE (0000:00:12.0) NSID 1 from core 1: 7665.57 29.94 2086.81 1068.37 5896.40 00:08:42.844 PCIE (0000:00:12.0) NSID 2 from core 1: 7665.57 29.94 2086.77 1042.74 5837.26 00:08:42.844 PCIE (0000:00:12.0) NSID 3 from core 1: 7665.57 29.94 2086.73 970.38 6731.44 00:08:42.844 ======================================================== 00:08:42.844 Total : 45993.40 179.66 2086.64 970.38 6731.44 00:08:42.844 00:08:42.844 Initializing NVMe Controllers 00:08:42.844 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:42.844 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:42.844 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:42.844 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:42.844 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:42.844 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:42.844 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:42.844 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:42.844 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:42.844 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:42.844 Initialization complete. Launching workers. 00:08:42.844 ======================================================== 00:08:42.844 Latency(us) 00:08:42.844 Device Information : IOPS MiB/s Average min max 00:08:42.844 PCIE (0000:00:10.0) NSID 1 from core 2: 3305.58 12.91 4838.61 1168.10 11954.89 00:08:42.844 PCIE (0000:00:11.0) NSID 1 from core 2: 3305.58 12.91 4839.80 1205.12 14477.97 00:08:42.844 PCIE (0000:00:13.0) NSID 1 from core 2: 3305.58 12.91 4839.36 1250.50 14459.68 00:08:42.844 PCIE (0000:00:12.0) NSID 1 from core 2: 3305.58 12.91 4839.75 1108.32 11442.68 00:08:42.844 PCIE (0000:00:12.0) NSID 2 from core 2: 3305.58 12.91 4839.70 1335.25 11711.54 00:08:42.844 PCIE (0000:00:12.0) NSID 3 from core 2: 3305.58 12.91 4839.77 1142.27 13106.38 00:08:42.844 ======================================================== 00:08:42.844 Total : 19833.48 77.47 4839.50 1108.32 14477.97 00:08:42.844 00:08:42.844 14:01:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63831 00:08:44.749 Initializing NVMe Controllers 00:08:44.749 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.749 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.749 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.749 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.749 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:44.749 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:44.749 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:44.749 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:44.749 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:44.749 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:44.749 Initialization complete. Launching workers. 
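The MiB/s column in these tables is simply IOPS times the 4 KiB transfer size. A quick check against the 7665.57-IOPS rows above:

    # 7665.57 IOPS * 4096 B / 2^20 = 29.94 MiB/s, matching the table.
    awk 'BEGIN { printf "%.2f MiB/s\n", 7665.57 * 4096 / 1048576 }'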
00:08:44.749 ======================================================== 00:08:44.749 Latency(us) 00:08:44.749 Device Information : IOPS MiB/s Average min max 00:08:44.749 PCIE (0000:00:10.0) NSID 1 from core 0: 10853.83 42.40 1472.92 687.69 4975.11 00:08:44.749 PCIE (0000:00:11.0) NSID 1 from core 0: 10853.83 42.40 1473.84 710.22 5159.29 00:08:44.749 PCIE (0000:00:13.0) NSID 1 from core 0: 10853.83 42.40 1473.88 710.26 5110.41 00:08:44.749 PCIE (0000:00:12.0) NSID 1 from core 0: 10853.83 42.40 1473.93 704.51 5073.23 00:08:44.749 PCIE (0000:00:12.0) NSID 2 from core 0: 10853.83 42.40 1473.98 707.33 5101.10 00:08:44.749 PCIE (0000:00:12.0) NSID 3 from core 0: 10853.83 42.40 1474.02 716.56 4631.33 00:08:44.749 ======================================================== 00:08:44.749 Total : 65122.96 254.39 1473.76 687.69 5159.29 00:08:44.749 00:08:44.749 14:01:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63832 00:08:44.749 14:01:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63901 00:08:44.749 14:01:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:44.749 14:01:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63902 00:08:44.749 14:01:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:44.749 14:01:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:48.028 Initializing NVMe Controllers 00:08:48.028 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.028 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.028 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.028 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.028 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:48.028 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:48.028 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:48.028 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:48.028 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:48.028 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:48.028 Initialization complete. Launching workers. 
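The pid0/pid1 assignments and the wait lines in the traces above suggest one plausible orchestration: background the two short runs, run the third in the foreground, then reap the background pids. A sketch under that reading; the script's real control flow may differ:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 & pid0=$!   # 3 s run
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # 3 s run
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4              # 5 s run
    wait "$pid0" "$pid1"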
00:08:48.028 ======================================================== 00:08:48.028 Latency(us) 00:08:48.028 Device Information : IOPS MiB/s Average min max 00:08:48.028 PCIE (0000:00:10.0) NSID 1 from core 0: 7834.57 30.60 2040.85 736.13 6627.93 00:08:48.028 PCIE (0000:00:11.0) NSID 1 from core 0: 7834.57 30.60 2041.83 754.41 6521.89 00:08:48.028 PCIE (0000:00:13.0) NSID 1 from core 0: 7834.57 30.60 2041.79 702.47 5999.59 00:08:48.028 PCIE (0000:00:12.0) NSID 1 from core 0: 7834.57 30.60 2041.85 748.50 6056.75 00:08:48.028 PCIE (0000:00:12.0) NSID 2 from core 0: 7834.57 30.60 2041.82 759.29 6356.43 00:08:48.028 PCIE (0000:00:12.0) NSID 3 from core 0: 7834.57 30.60 2041.79 756.41 6520.64 00:08:48.028 ======================================================== 00:08:48.028 Total : 47007.44 183.62 2041.65 702.47 6627.93 00:08:48.028 00:08:48.028 Initializing NVMe Controllers 00:08:48.028 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.028 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.028 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.028 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.028 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:48.028 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:48.028 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:48.028 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:48.028 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:48.028 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:48.028 Initialization complete. Launching workers. 00:08:48.028 ======================================================== 00:08:48.028 Latency(us) 00:08:48.028 Device Information : IOPS MiB/s Average min max 00:08:48.028 PCIE (0000:00:10.0) NSID 1 from core 1: 7779.96 30.39 2055.17 714.26 5427.29 00:08:48.028 PCIE (0000:00:11.0) NSID 1 from core 1: 7779.96 30.39 2056.15 728.97 5577.41 00:08:48.028 PCIE (0000:00:13.0) NSID 1 from core 1: 7779.96 30.39 2056.10 741.03 5791.44 00:08:48.028 PCIE (0000:00:12.0) NSID 1 from core 1: 7779.96 30.39 2056.13 743.32 5486.81 00:08:48.028 PCIE (0000:00:12.0) NSID 2 from core 1: 7779.96 30.39 2056.10 735.35 5240.31 00:08:48.028 PCIE (0000:00:12.0) NSID 3 from core 1: 7779.96 30.39 2056.07 733.77 5599.27 00:08:48.028 ======================================================== 00:08:48.028 Total : 46679.78 182.34 2055.95 714.26 5791.44 00:08:48.028 00:08:50.561 Initializing NVMe Controllers 00:08:50.561 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.561 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.561 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.561 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.561 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:50.561 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:50.561 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:50.561 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:50.561 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:50.561 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:50.561 Initialization complete. Launching workers. 
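With 16 I/Os kept in flight per namespace, the roughly 2 ms averages here follow from Little's law: mean latency ≈ queue depth / IOPS. Checking the 7834.57-IOPS rows in the table above:

    # 16 / 7834.57 IOPS = 2042.2 us, close to the 2041.65 us table average.
    awk 'BEGIN { printf "%.1f us\n", 16 / 7834.57 * 1e6 }'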
00:08:50.561 ======================================================== 00:08:50.561 Latency(us) 00:08:50.561 Device Information : IOPS MiB/s Average min max 00:08:50.561 PCIE (0000:00:10.0) NSID 1 from core 2: 4720.18 18.44 3387.30 747.52 11763.97 00:08:50.561 PCIE (0000:00:11.0) NSID 1 from core 2: 4720.18 18.44 3389.12 725.44 12647.79 00:08:50.561 PCIE (0000:00:13.0) NSID 1 from core 2: 4720.18 18.44 3388.89 750.38 12583.23 00:08:50.561 PCIE (0000:00:12.0) NSID 1 from core 2: 4720.18 18.44 3388.83 752.39 12431.53 00:08:50.561 PCIE (0000:00:12.0) NSID 2 from core 2: 4720.18 18.44 3389.12 749.53 12614.63 00:08:50.561 PCIE (0000:00:12.0) NSID 3 from core 2: 4720.18 18.44 3388.05 724.29 12251.30 00:08:50.561 ======================================================== 00:08:50.561 Total : 28321.10 110.63 3388.55 724.29 12647.79 00:08:50.561 00:08:50.561 ************************************ 00:08:50.561 END TEST nvme_multi_secondary 00:08:50.561 ************************************ 00:08:50.561 14:01:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63901 00:08:50.561 14:01:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63902 00:08:50.561 00:08:50.561 real 0m10.668s 00:08:50.561 user 0m18.397s 00:08:50.561 sys 0m0.614s 00:08:50.561 14:01:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.561 14:01:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:50.561 14:01:51 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:50.561 14:01:51 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:50.561 14:01:51 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62852 ]] 00:08:50.561 14:01:51 nvme -- common/autotest_common.sh@1094 -- # kill 62852 00:08:50.561 14:01:51 nvme -- common/autotest_common.sh@1095 -- # wait 62852 00:08:50.561 [2024-12-09 14:01:51.849938] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.850045] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.850090] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.850119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.852249] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.852298] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.852315] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.852331] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.854298] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 
00:08:50.561 [2024-12-09 14:01:51.854342] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.854357] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.854371] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.856342] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.856390] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.856404] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 [2024-12-09 14:01:51.856420] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63780) is not found. Dropping the request. 00:08:50.561 14:01:51 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:50.561 14:01:51 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:50.561 14:01:51 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:50.561 14:01:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.561 14:01:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.561 14:01:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.561 ************************************ 00:08:50.561 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:50.561 ************************************ 00:08:50.561 14:01:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:50.561 * Looking for test storage... 
00:08:50.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.561 --rc genhtml_branch_coverage=1 00:08:50.561 --rc genhtml_function_coverage=1 00:08:50.561 --rc genhtml_legend=1 00:08:50.561 --rc geninfo_all_blocks=1 00:08:50.561 --rc geninfo_unexecuted_blocks=1 00:08:50.561 00:08:50.561 ' 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.561 --rc genhtml_branch_coverage=1 00:08:50.561 --rc genhtml_function_coverage=1 00:08:50.561 --rc genhtml_legend=1 00:08:50.561 --rc geninfo_all_blocks=1 00:08:50.561 --rc geninfo_unexecuted_blocks=1 00:08:50.561 00:08:50.561 ' 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.561 --rc genhtml_branch_coverage=1 00:08:50.561 --rc genhtml_function_coverage=1 00:08:50.561 --rc genhtml_legend=1 00:08:50.561 --rc geninfo_all_blocks=1 00:08:50.561 --rc geninfo_unexecuted_blocks=1 00:08:50.561 00:08:50.561 ' 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.561 --rc genhtml_branch_coverage=1 00:08:50.561 --rc genhtml_function_coverage=1 00:08:50.561 --rc genhtml_legend=1 00:08:50.561 --rc geninfo_all_blocks=1 00:08:50.561 --rc geninfo_unexecuted_blocks=1 00:08:50.561 00:08:50.561 ' 00:08:50.561 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:50.562 
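These knobs, together with err_injection_sct/sc set just below, feed the final checks at nvme_reset_stuck_adm_cmd.sh@75 and @79 near the end of this test. Rewritten into their passing form (the explicit failure handling here is assumed):

    # What must hold for the test to pass:
    (( nvme_status_sc == err_injection_sc && nvme_status_sct == err_injection_sct )) || exit 1
    (( diff_time <= test_timeout )) || exit 1   # reset unstuck the command within 5 s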
14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:50.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64064 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64064 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64064 ']' 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
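The target launch traced here reduces to starting spdk_tgt and waiting for its RPC socket. A sketch in which the polling loop is a crude stand-in for waitforlisten, not its actual implementation:

    # Start the target on four cores (-m 0xF) and wait for /var/tmp/spdk.sock:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_target_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done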
00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.562 14:01:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.562 [2024-12-09 14:01:52.273608] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:08:50.562 [2024-12-09 14:01:52.273884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64064 ] 00:08:50.819 [2024-12-09 14:01:52.442107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.819 [2024-12-09 14:01:52.545240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.819 [2024-12-09 14:01:52.545292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.819 [2024-12-09 14:01:52.545393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.819 [2024-12-09 14:01:52.545605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.385 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.386 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:51.386 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:51.386 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.386 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:51.644 nvme0n1 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_VtoFK.txt 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:51.644 true 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733752913 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64087 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:51.644 14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:51.644 
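Read together, the RPCs traced above deliberately wedge one admin command so that the reset path can be exercised. The sequence, with the commands copied from this log and hedged glosses in the comments:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the first controller as bdev-layer controller "nvme0":
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Hold the next admin opcode 10 (0x0a, Get Features) for up to 15 s instead
    # of submitting it, then complete it with sct=0/sc=1 (Invalid Opcode):
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Issue Get Features (cdw10=7, Number of Queues) so it gets stuck; the
    # bdev_nvme_reset_controller call two seconds later (nvme_reset_stuck_adm_cmd.sh@57
    # below) is what completes it, as the INVALID OPCODE completion further on shows:
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==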
14:01:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.548 [2024-12-09 14:01:55.229790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:53.548 [2024-12-09 14:01:55.230147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:53.548 [2024-12-09 14:01:55.230177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:53.548 [2024-12-09 14:01:55.230190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.548 [2024-12-09 14:01:55.231794] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:53.548 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64087 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64087 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64087 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_VtoFK.txt 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:53.548 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_VtoFK.txt 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64064 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64064 ']' 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64064 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.549 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64064 00:08:53.809 killing process with pid 64064 00:08:53.809 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.809 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.810 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64064' 00:08:53.810 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64064 00:08:53.810 14:01:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64064 00:08:55.180 14:01:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:55.180 14:01:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:55.180 00:08:55.180 real 0m4.739s 00:08:55.180 user 0m16.951s 00:08:55.180 sys 0m0.495s 00:08:55.180 ************************************ 00:08:55.180 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:55.180 
************************************ 00:08:55.180 14:01:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.180 14:01:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.180 14:01:56 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:55.180 14:01:56 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:55.180 14:01:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.180 14:01:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.180 14:01:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.180 ************************************ 00:08:55.180 START TEST nvme_fio 00:08:55.180 ************************************ 00:08:55.180 14:01:56 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:55.180 14:01:56 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:55.180 14:01:56 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:55.180 14:01:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:55.180 14:01:56 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:55.180 14:01:56 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:55.180 14:01:56 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:55.180 14:01:56 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:55.180 14:01:56 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.180 14:01:56 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:55.180 14:01:56 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:55.180 14:01:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:55.180 14:01:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:55.180 14:01:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:55.180 14:01:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:55.180 14:01:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:55.438 14:01:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:55.438 14:01:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:55.697 14:01:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:55.697 14:01:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1344 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:55.697 14:01:57 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:55.697 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:55.697 fio-3.35 00:08:55.697 Starting 1 thread 00:09:02.247 00:09:02.247 test: (groupid=0, jobs=1): err= 0: pid=64231: Mon Dec 9 14:02:02 2024 00:09:02.247 read: IOPS=24.4k, BW=95.4MiB/s (100MB/s)(191MiB/2001msec) 00:09:02.247 slat (nsec): min=3424, max=55340, avg=4850.52, stdev=1729.84 00:09:02.247 clat (usec): min=551, max=13130, avg=2604.07, stdev=663.23 00:09:02.247 lat (usec): min=556, max=13185, avg=2608.92, stdev=664.29 00:09:02.247 clat percentiles (usec): 00:09:02.247 | 1.00th=[ 1729], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:02.247 | 30.00th=[ 2409], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:02.247 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2900], 95.00th=[ 3851], 00:09:02.247 | 99.00th=[ 5538], 99.50th=[ 6128], 99.90th=[ 8586], 99.95th=[ 8979], 00:09:02.247 | 99.99th=[12780] 00:09:02.247 bw ( KiB/s): min=96040, max=99057, per=99.78%, avg=97472.33, stdev=1514.26, samples=3 00:09:02.247 iops : min=24010, max=24764, avg=24368.00, stdev=378.43, samples=3 00:09:02.247 write: IOPS=24.3k, BW=94.8MiB/s (99.4MB/s)(190MiB/2001msec); 0 zone resets 00:09:02.247 slat (nsec): min=3555, max=88707, avg=5124.94, stdev=1754.09 00:09:02.247 clat (usec): min=527, max=12956, avg=2632.75, stdev=730.15 00:09:02.247 lat (usec): min=532, max=12969, avg=2637.87, stdev=731.07 00:09:02.247 clat percentiles (usec): 00:09:02.247 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:02.247 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:09:02.247 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 2999], 95.00th=[ 3884], 00:09:02.247 | 99.00th=[ 5669], 99.50th=[ 6521], 99.90th=[10945], 99.95th=[11207], 00:09:02.247 | 99.99th=[12387] 00:09:02.247 bw ( KiB/s): min=96512, max=98834, per=100.00%, avg=97456.67, stdev=1219.97, samples=3 00:09:02.247 iops : min=24128, max=24708, avg=24364.00, stdev=304.71, samples=3 00:09:02.247 lat (usec) : 750=0.01%, 1000=0.03% 00:09:02.247 lat (msec) : 2=2.50%, 4=92.83%, 10=4.53%, 20=0.09% 00:09:02.247 cpu : usr=99.35%, sys=0.00%, ctx=2, majf=0, minf=607 00:09:02.247 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:02.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.247 issued rwts: total=48870,48558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.247 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.247 00:09:02.247 Run status group 0 (all jobs): 00:09:02.248 READ: bw=95.4MiB/s (100MB/s), 95.4MiB/s-95.4MiB/s (100MB/s-100MB/s), io=191MiB (200MB), run=2001-2001msec 00:09:02.248 WRITE: bw=94.8MiB/s (99.4MB/s), 94.8MiB/s-94.8MiB/s (99.4MB/s-99.4MB/s), io=190MiB (199MB), run=2001-2001msec 00:09:02.248 ----------------------------------------------------- 00:09:02.248 Suppressions used: 00:09:02.248 count bytes template 00:09:02.248 1 32 /usr/src/fio/parse.c 00:09:02.248 1 8 libtcmalloc_minimal.so 00:09:02.248 ----------------------------------------------------- 00:09:02.248 00:09:02.248 14:02:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:02.248 14:02:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:02.248 14:02:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:02.248 14:02:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:02.248 14:02:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:02.248 14:02:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:02.248 14:02:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:02.248 14:02:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:02.248 14:02:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:02.248 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:02.248 fio-3.35 00:09:02.248 Starting 1 thread 00:09:05.528 00:09:05.528 test: (groupid=0, jobs=1): err= 0: pid=64289: Mon Dec 9 14:02:07 2024 00:09:05.528 read: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)(138MiB/2018msec) 00:09:05.528 slat (nsec): min=3375, max=73846, avg=5211.71, stdev=2553.19 00:09:05.528 clat (usec): min=667, max=33636, avg=2568.24, stdev=1257.91 00:09:05.528 lat (usec): min=671, max=33639, avg=2573.45, stdev=1258.52 00:09:05.528 clat percentiles (usec): 00:09:05.528 | 1.00th=[ 1172], 5.00th=[ 1319], 10.00th=[ 1483], 20.00th=[ 1893], 00:09:05.528 | 30.00th=[ 2245], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:05.528 | 70.00th=[ 2606], 80.00th=[ 2868], 90.00th=[ 3589], 95.00th=[ 4293], 00:09:05.528 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[12387], 99.95th=[29492], 00:09:05.528 | 99.99th=[33424] 00:09:05.528 bw ( KiB/s): min=46696, max=96608, per=100.00%, avg=70396.00, stdev=21428.12, samples=4 00:09:05.528 iops : min=11674, max=24152, avg=17599.00, stdev=5357.03, samples=4 00:09:05.528 write: IOPS=17.5k, BW=68.2MiB/s (71.6MB/s)(138MiB/2018msec); 0 zone resets 00:09:05.528 slat (nsec): min=3497, max=62992, avg=5530.85, stdev=2473.92 00:09:05.528 clat (usec): min=883, max=40563, avg=4732.32, stdev=4226.22 00:09:05.528 lat (usec): min=887, max=40567, avg=4737.85, stdev=4226.47 00:09:05.528 clat percentiles (usec): 00:09:05.528 | 1.00th=[ 1270], 5.00th=[ 1582], 10.00th=[ 1958], 20.00th=[ 2376], 00:09:05.528 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2868], 00:09:05.528 | 70.00th=[ 3949], 80.00th=[ 8455], 90.00th=[11338], 95.00th=[13173], 00:09:05.528 | 99.00th=[17433], 99.50th=[19792], 99.90th=[37487], 99.95th=[39060], 00:09:05.528 | 99.99th=[40633] 00:09:05.528 bw ( KiB/s): min=46624, max=96424, per=100.00%, avg=70326.00, stdev=21210.62, samples=4 00:09:05.528 iops : min=11656, max=24106, avg=17581.50, stdev=5302.66, samples=4 00:09:05.528 lat (usec) : 750=0.01%, 1000=0.08% 00:09:05.528 lat (msec) : 2=16.63%, 4=65.07%, 10=10.69%, 20=7.24%, 50=0.29% 00:09:05.528 cpu : usr=99.21%, sys=0.10%, ctx=4, majf=0, minf=608 00:09:05.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:05.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.528 issued rwts: total=35231,35258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.528 00:09:05.528 Run status group 0 (all jobs): 00:09:05.528 READ: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=138MiB (144MB), run=2018-2018msec 00:09:05.528 WRITE: bw=68.2MiB/s (71.6MB/s), 68.2MiB/s-68.2MiB/s (71.6MB/s-71.6MB/s), io=138MiB (144MB), run=2018-2018msec 00:09:05.786 ----------------------------------------------------- 00:09:05.786 Suppressions used: 00:09:05.786 count bytes template 00:09:05.786 1 32 /usr/src/fio/parse.c 00:09:05.786 1 8 libtcmalloc_minimal.so 00:09:05.786 ----------------------------------------------------- 00:09:05.786 00:09:05.786 14:02:07 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:09:05.786 14:02:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:05.786 14:02:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:05.786 14:02:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:06.043 14:02:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:06.043 14:02:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:06.301 14:02:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:06.301 14:02:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:06.301 14:02:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:06.301 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:06.301 fio-3.35 00:09:06.301 Starting 1 thread 00:09:14.484 00:09:14.484 test: (groupid=0, jobs=1): err= 0: pid=64348: Mon Dec 9 14:02:15 2024 00:09:14.484 read: IOPS=22.0k, BW=85.8MiB/s (89.9MB/s)(172MiB/2001msec) 00:09:14.484 slat (nsec): min=4207, max=56711, avg=5209.78, stdev=2319.23 00:09:14.484 clat (usec): min=222, max=8978, avg=2909.62, stdev=881.37 00:09:14.484 lat (usec): min=226, max=9013, avg=2914.83, stdev=882.67 00:09:14.484 clat percentiles (usec): 00:09:14.484 | 1.00th=[ 2024], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2474], 00:09:14.484 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 
2573], 60.00th=[ 2638], 00:09:14.484 | 70.00th=[ 2737], 80.00th=[ 3064], 90.00th=[ 3818], 95.00th=[ 5145], 00:09:14.484 | 99.00th=[ 6390], 99.50th=[ 6980], 99.90th=[ 7701], 99.95th=[ 7898], 00:09:14.484 | 99.99th=[ 8717] 00:09:14.484 bw ( KiB/s): min=87728, max=90168, per=100.00%, avg=88985.67, stdev=1221.74, samples=3 00:09:14.484 iops : min=21932, max=22542, avg=22246.33, stdev=305.43, samples=3 00:09:14.484 write: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(170MiB/2001msec); 0 zone resets 00:09:14.484 slat (nsec): min=4325, max=97865, avg=5518.19, stdev=2441.31 00:09:14.484 clat (usec): min=255, max=8819, avg=2921.01, stdev=886.32 00:09:14.484 lat (usec): min=259, max=8827, avg=2926.53, stdev=887.65 00:09:14.484 clat percentiles (usec): 00:09:14.484 | 1.00th=[ 2040], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2474], 00:09:14.484 | 30.00th=[ 2540], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2638], 00:09:14.484 | 70.00th=[ 2737], 80.00th=[ 3097], 90.00th=[ 3884], 95.00th=[ 5211], 00:09:14.484 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7701], 99.95th=[ 7898], 00:09:14.484 | 99.99th=[ 8455] 00:09:14.484 bw ( KiB/s): min=88638, max=89864, per=100.00%, avg=89167.33, stdev=629.90, samples=3 00:09:14.484 iops : min=22159, max=22466, avg=22291.67, stdev=157.68, samples=3 00:09:14.484 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:09:14.484 lat (msec) : 2=0.83%, 4=90.00%, 10=9.11% 00:09:14.484 cpu : usr=99.15%, sys=0.05%, ctx=3, majf=0, minf=607 00:09:14.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:14.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.484 issued rwts: total=43927,43632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.484 00:09:14.484 Run status group 0 (all jobs): 00:09:14.484 READ: bw=85.8MiB/s (89.9MB/s), 85.8MiB/s-85.8MiB/s (89.9MB/s-89.9MB/s), io=172MiB (180MB), run=2001-2001msec 00:09:14.484 WRITE: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:09:14.484 ----------------------------------------------------- 00:09:14.484 Suppressions used: 00:09:14.484 count bytes template 00:09:14.484 1 32 /usr/src/fio/parse.c 00:09:14.484 1 8 libtcmalloc_minimal.so 00:09:14.484 ----------------------------------------------------- 00:09:14.484 00:09:14.484 14:02:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:14.484 14:02:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:14.484 14:02:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:14.484 14:02:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:14.743 14:02:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:14.743 14:02:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:14.743 14:02:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:14.743 14:02:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:14.743 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:15.001 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:15.001 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:15.001 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:15.001 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:15.001 14:02:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:15.001 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:15.001 fio-3.35 00:09:15.001 Starting 1 thread 00:09:24.967 00:09:24.967 test: (groupid=0, jobs=1): err= 0: pid=64415: Mon Dec 9 14:02:25 2024 00:09:24.967 read: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(181MiB/2001msec) 00:09:24.967 slat (nsec): min=3346, max=86265, avg=5027.70, stdev=2264.53 00:09:24.967 clat (usec): min=206, max=8805, avg=2765.82, stdev=831.40 00:09:24.967 lat (usec): min=210, max=8891, avg=2770.85, stdev=832.82 00:09:24.967 clat percentiles (usec): 00:09:24.967 | 1.00th=[ 1598], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:24.967 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:24.967 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3621], 95.00th=[ 4817], 00:09:24.967 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7570], 00:09:24.967 | 99.99th=[ 8586] 00:09:24.967 bw ( KiB/s): min=87880, max=95512, per=98.66%, avg=91152.00, stdev=3930.61, samples=3 00:09:24.967 iops : min=21970, max=23878, avg=22788.00, stdev=982.65, samples=3 00:09:24.967 write: IOPS=23.0k, BW=89.7MiB/s (94.1MB/s)(180MiB/2001msec); 0 zone resets 00:09:24.967 slat (usec): min=3, max=209, avg= 5.32, stdev= 2.45 00:09:24.967 clat (usec): min=221, max=8659, avg=2769.03, stdev=834.82 00:09:24.967 lat (usec): min=226, max=8672, avg=2774.35, stdev=836.25 00:09:24.967 clat percentiles (usec): 00:09:24.967 | 1.00th=[ 1614], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:24.967 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:24.967 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 3654], 95.00th=[ 4883], 00:09:24.967 | 
99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7635], 00:09:24.967 | 99.99th=[ 8356] 00:09:24.967 bw ( KiB/s): min=87448, max=96576, per=99.41%, avg=91322.67, stdev=4717.59, samples=3 00:09:24.967 iops : min=21862, max=24144, avg=22830.67, stdev=1179.40, samples=3 00:09:24.967 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.08% 00:09:24.967 lat (msec) : 2=2.65%, 4=88.78%, 10=8.46% 00:09:24.967 cpu : usr=99.15%, sys=0.15%, ctx=4, majf=0, minf=606 00:09:24.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.967 issued rwts: total=46218,45956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.967 00:09:24.967 Run status group 0 (all jobs): 00:09:24.967 READ: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=181MiB (189MB), run=2001-2001msec 00:09:24.967 WRITE: bw=89.7MiB/s (94.1MB/s), 89.7MiB/s-89.7MiB/s (94.1MB/s-94.1MB/s), io=180MiB (188MB), run=2001-2001msec 00:09:24.967 ----------------------------------------------------- 00:09:24.967 Suppressions used: 00:09:24.967 count bytes template 00:09:24.967 1 32 /usr/src/fio/parse.c 00:09:24.967 1 8 libtcmalloc_minimal.so 00:09:24.967 ----------------------------------------------------- 00:09:24.967 00:09:24.967 14:02:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.967 14:02:25 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:24.967 00:09:24.967 real 0m29.135s 00:09:24.967 user 0m24.181s 00:09:24.967 sys 0m5.773s 00:09:24.967 ************************************ 00:09:24.967 14:02:25 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.967 14:02:25 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:24.967 END TEST nvme_fio 00:09:24.967 ************************************ 00:09:24.967 00:09:24.967 real 1m39.137s 00:09:24.967 user 3m46.805s 00:09:24.967 sys 0m16.211s 00:09:24.967 14:02:25 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.967 14:02:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.967 ************************************ 00:09:24.967 END TEST nvme 00:09:24.967 ************************************ 00:09:24.967 14:02:25 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:24.967 14:02:25 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.967 14:02:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.967 14:02:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.967 14:02:25 -- common/autotest_common.sh@10 -- # set +x 00:09:24.967 ************************************ 00:09:24.967 START TEST nvme_scc 00:09:24.967 ************************************ 00:09:24.967 14:02:25 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.967 * Looking for test storage... 
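The nvme_fio pass above ran the same identify-then-fio sequence against all four PCIe controllers (0000:00:10.0 through 0000:00:13.0). A rough bash sketch of the loop traced at nvme.sh@33-44, reconstructed from the xtrace rather than quoted from the script — the 4160-byte extended-LBA block size and the exact fio_nvme wrapper shape are assumptions:

    # bdfs comes from gen_nvme.sh, as traced at the start of nvme_fio_test
    for bdf in "${bdfs[@]}"; do
        # skip controllers that expose no active namespaces
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
            | grep -qE '^Namespace ID:[0-9]+' || continue
        # extended-LBA formats carry metadata inline, so they need a larger bs
        if "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
            | grep -q 'Extended Data LBA'; then
            bs=4160    # assumed: 4096B of data plus inline metadata
        else
            bs=4096    # the branch taken for every controller above
        fi
        # fio treats ':' as a filename separator, hence traddr=0000.00.10.0
        fio_nvme "$PLUGIN_DIR/example_config.fio" \
            "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=$bs
        ran_fio=true
    done

fio_nvme itself is the wrapper traced at autotest_common.sh@1341-1356: it runs ldd against the spdk_nvme ioengine, takes the first libasan/libclang_rt.asan it links to, and launches fio with LD_PRELOAD set to that sanitizer runtime plus the plugin — which is why each run above logs an LD_PRELOAD line just before fio starts.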
00:09:24.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.967 14:02:26 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.967 14:02:26 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.967 14:02:26 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.967 14:02:26 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:24.968 14:02:26 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.968 14:02:26 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.968 --rc genhtml_branch_coverage=1 00:09:24.968 --rc genhtml_function_coverage=1 00:09:24.968 --rc genhtml_legend=1 00:09:24.968 --rc geninfo_all_blocks=1 00:09:24.968 --rc geninfo_unexecuted_blocks=1 00:09:24.968 00:09:24.968 ' 00:09:24.968 14:02:26 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.968 --rc genhtml_branch_coverage=1 00:09:24.968 --rc genhtml_function_coverage=1 00:09:24.968 --rc genhtml_legend=1 00:09:24.968 --rc geninfo_all_blocks=1 00:09:24.968 --rc geninfo_unexecuted_blocks=1 00:09:24.968 00:09:24.968 ' 00:09:24.968 14:02:26 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.968 --rc genhtml_branch_coverage=1 00:09:24.968 --rc genhtml_function_coverage=1 00:09:24.968 --rc genhtml_legend=1 00:09:24.968 --rc geninfo_all_blocks=1 00:09:24.968 --rc geninfo_unexecuted_blocks=1 00:09:24.968 00:09:24.968 ' 00:09:24.968 14:02:26 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.968 --rc genhtml_branch_coverage=1 00:09:24.968 --rc genhtml_function_coverage=1 00:09:24.968 --rc genhtml_legend=1 00:09:24.968 --rc geninfo_all_blocks=1 00:09:24.968 --rc geninfo_unexecuted_blocks=1 00:09:24.968 00:09:24.968 ' 00:09:24.968 14:02:26 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.968 14:02:26 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.968 14:02:26 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.968 14:02:26 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.968 14:02:26 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.968 14:02:26 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:24.968 14:02:26 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
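The lt 1.15 2 call traced above is the lcov version gate in autotest_common.sh: the pre-2.x flag spellings (--rc lcov_branch_coverage=1 and friends) are exported into LCOV_OPTS only when the installed lcov is older than 2.x, as it is here. A simplified sketch of the comparison helpers from scripts/common.sh, reduced to the strict less-than case actually exercised in this trace:

    # versions are split on '.', '-' and ':' and compared field by field;
    # missing fields count as 0, so "1.15" vs "2" is decided at 1 < 2
    cmp_versions() {    # usage here: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not strictly less-than
    }
    lt() { cmp_versions "$1" '<' "$2"; }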
00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:24.968 14:02:26 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:24.968 14:02:26 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.968 14:02:26 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:24.968 14:02:26 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:24.968 14:02:26 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:24.968 14:02:26 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.968 Waiting for block devices as requested 00:09:24.968 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.968 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.968 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.227 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.563 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:30.563 14:02:31 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:30.563 14:02:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.563 14:02:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:30.563 14:02:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.563 14:02:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
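scan_nvme_ctrls walks /sys/class/nvme/nvme*, resolves each controller to its PCI address, and then nvme_get flattens nvme-cli's id-ctrl output into a global associative array (nvme0, nvme1, ...) that later feature checks can index. A condensed sketch of the parsing loop that produces the field dump below; the real functions.sh also handles per-namespace data and binary fields, which are omitted here:

    nvme_get() {    # usage as traced: nvme_get nvme0 id-ctrl /dev/nvme0
        local ref=$1 reg val
        local -gA "$ref=()"                    # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip lines with no "name : value"
            reg=${reg//[[:space:]]/}           # 'vid   ' -> 'vid'
            val=${val#"${val%%[![:space:]]*}"} # trim leading blanks only, so
                                               # padded values like sn='12341 '
                                               # keep their trailing spaces
            eval "${ref}[$reg]=\"$val\""       # nvme0[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$2" "$3")
    }

Each [[ -n ... ]] / eval pair in the trace that follows is one iteration of this loop, one log line per identify field.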
00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.563 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:30.564 14:02:31 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:30.564 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.565 14:02:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:30.565 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:30.566 
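The trace above is functions.sh's nvme_get walking the output of nvme id-ns /dev/ng0n1 field by field: with IFS set to ':', each line of nvme-cli output splits into a register name and a value, and an eval stores the pair in a global associative array named after the device node (for example ng0n1[nsze]=0x140000). A minimal sketch of that pattern, reconstructed from the trace rather than taken verbatim from functions.sh — the helper name nvme_get_sketch and the exact whitespace trimming are illustrative:

#!/usr/bin/env bash
# Sketch: parse "name : value" lines from nvme-cli into an associative array.
# Reconstructed from the xtrace above; not the verbatim functions.sh source.
nvme_get_sketch() {
    local ref=$1 reg val                  # $1 = array name, rest = nvme-cli cmd
    shift
    local -gA "$ref=()"                   # global assoc array, e.g. ng0n1 (cf. @20)
    while IFS=: read -r reg val; do       # split on the first ':' only (cf. @21)
        [[ -n $reg && -n $val ]] || continue          # skip blank lines (cf. @22)
        reg=${reg//[[:space:]]/}          # "nsze   " -> nsze, "ps 0" -> ps0
        val=${val#"${val%%[![:space:]]*}"}            # left-trim the value
        eval "${ref}[\$reg]=\$val"        # e.g. ng0n1[nsze]=0x140000 (cf. @23)
    done < <("$@")
}

# Usage (assumes nvme-cli is installed and /dev/ng0n1 exists):
#   nvme_get_sketch ng0n1 nvme id-ns /dev/ng0n1
#   echo "${ng0n1[nsze]}"    # -> 0x140000 on the device traced above

Because read leaves the remainder of the line in its last variable, values that themselves contain colons (the lbafN and power-state strings above) survive intact.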
14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
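The for ns in "$ctrl/"@(...) entries at functions.sh@54 enumerate both namespace flavors a controller exposes under /sys/class/nvme: the generic character device (ng0n1) and the block device (nvme0n1), which is why the trace parses id-ns twice per namespace. A short sketch of how that extglob expands, with an illustrative controller path and an echo only for demonstration:

#!/usr/bin/env bash
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme0            # illustrative controller path

# "ng${ctrl##*nvme}" -> ng0     matches ng0n1, ng0n2, ... (char devices)
# "${ctrl##*/}n"     -> nvme0n  matches nvme0n1, ...      (block devices)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                  # ng0n1 or nvme0n1 (cf. @56)
    echo "found $ns_dev (nsid ${ns_dev##*n})"   # trailing index keys _ctrl_ns (cf. @58)
done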
00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.566 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:30.567 14:02:31 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:30.567 14:02:31 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:30.567 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:30.568 14:02:31 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.568 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:30.569 14:02:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.569 14:02:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:30.569 14:02:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.569 14:02:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:30.569 14:02:31 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.569 
14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:30.569 
14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.569 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:31 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:30.570 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
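Editor's note: the trace above is one five-line pattern repeated per identify field — functions.sh@21 sets IFS=':' and reads a reg/val pair, @22 skips empty values, and @23 evals the pair into a global associative array (nvme1[oacs]=0x12a, and so on). A minimal standalone sketch of that pattern, with a hypothetical helper name (parse_id_fields) and plain `nvme id-ctrl` text output assumed as input:

    parse_id_fields() {                        # e.g. parse_id_fields nvme1 /dev/nvme1
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                    # global associative array, as at @20
        while IFS=: read -r reg val; do
            reg=${reg%"${reg##*[![:space:]]}"} # trim trailing padding from the key
            val=${val# }                       # trim the single leading space
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"         # e.g. nvme1[oacs]=0x12a
        done < <(nvme id-ctrl "$dev")
    }

Splitting only on the first ':' is what lets a value like the subnqn seen later (nqn.2019-08.org.qemu:12340) survive with its own colon intact.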
00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.571 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.571 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.572 14:02:32 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
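Editor's note, for reading these id-ns dumps: nsze is the namespace size, ncap the capacity, and nuse the utilization, all in logical blocks. They are equal here (0x17a17a) because the namespace is not thin provisioned — NSFEAT bit 0 is clear in the 0x14 captured above. A one-liner to eyeball the hex, relying on bash printf's own hex-to-decimal conversion:

    printf 'nsze=%d ncap=%d nuse=%d logical blocks\n' 0x17a17a 0x17a17a 0x17a17a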
00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.572 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:30.573 14:02:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 
14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:30.573 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
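Editor's note: each lbafN value captured above packs three subfields — ms (metadata bytes per block), lbads (log2 of the data block size), and rp (relative performance) — and the "(in use)" suffix marks the format selected by the low nibble of flbas (0x7 here, so lbaf7: 64 metadata bytes, 4 KiB data blocks). A quick arithmetic sketch using the values from this dump:

    flbas=$(( 0x7 ))
    fmt=$(( flbas & 0xf ))                   # low nibble selects the LBA format
    lbads=12                                 # from "lbaf7 : ms:64 lbads:12 rp:0 (in use)"
    block=$(( 1 << lbads ))                  # 4096-byte data blocks
    nsze=$(( 0x17a17a ))                     # 1548666 logical blocks
    echo "lbaf${fmt}: ${block} B blocks, $(( nsze * block )) B total"   # ~6.3 GB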
00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:30.574 
14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.574 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:30.575 14:02:32 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:30.575 14:02:32 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.575 14:02:32 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:30.575 14:02:32 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.575 14:02:32 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.575 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
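Two of the fields captured just above are packed encodings rather than plain counts: ver=0x10400 is the NVMe version triple (major<<16 | minor<<8 | tertiary), and mdts=7 expresses the maximum data transfer size as a power of two in units of the controller's minimum memory page size. A hedged decode, assuming the common 4 KiB CAP.MPSMIN (the page size itself is not part of this trace):

    # Values from the trace; interpretation per the NVMe spec, with a 4 KiB
    # minimum page size assumed for the MDTS math.
    ver=0x10400 mdts=7 mpsmin=4096
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    printf 'max transfer per command: %d KiB\n' $(((1 << mdts) * mpsmin / 1024))
    # -> NVMe 1.4.0, 512 KiB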
00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:30.576 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
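Of the fields above, oacs=0x12a is a bitmask of optional admin commands and wctemp/cctemp are temperature thresholds in kelvins. A hedged decode of both; the bit names follow the NVMe spec's OACS layout rather than anything defined by this test:

    # Hedged: OACS bit positions per the NVMe spec, low bit first.
    oacs=0x12a
    bits=(security format firmware ns-mgmt self-test directives nvme-mi virt-mgmt dbbuf)
    for i in "${!bits[@]}"; do
      (( oacs & (1 << i) )) && echo "oacs: ${bits[i]} supported"
    done
    # -> format, ns-mgmt, directives, dbbuf (typical of QEMU's emulated controller)
    echo "warning at $((343 - 273))C, critical at $((373 - 273))C"   # kelvins -> 70C / 100C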
00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:30.576 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:30.577 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:30.577 
14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.577 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.578 
14:02:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
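For the namespace being parsed here, nsze/ncap/nuse (0x100000 above) count logical blocks, and the low nibble of flbas (0x4) indexes the in-use LBA format. The lbaf4 entries elsewhere in this trace read 'ms:0 lbads:12 rp:0 (in use)', and lbads is a power-of-two exponent, so each block is 4096 bytes. A hedged capacity calculation from those values:

    # Values from the trace; flbas bits 3:0 select the in-use LBA format.
    nsze=0x100000 flbas=0x4 lbads=12
    fmt=$((flbas & 0xf))
    echo "in-use format: lbaf$fmt, block size $((1 << lbads)) B"
    echo "namespace size: $((nsze * (1 << lbads) / 1024**3)) GiB"   # -> 4 GiB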
00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:30.578 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:30.579 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.579 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 
14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.580 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.581 14:02:32 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.581 14:02:32 
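The functions.sh@16-23 lines tracing past here are the nvme_get helper filling a global associative array (ng2n3 at this point) with the fields reported by nvme-cli's id-ns. A minimal sketch of that loop, reconstructed from the trace in this log — NVME_CMD stands in for the job's /usr/local/src/nvme-cli/nvme binary, and the real helper may trim fields slightly differently:

    #!/usr/bin/env bash
    NVME_CMD=${NVME_CMD:-nvme}

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declare ng2n3 as a global assoc array (functions.sh@20)
        while IFS=: read -r reg val; do     # id-ns prints "reg      : val" pairs
            reg=${reg//[[:space:]]/}        # "nsze     " -> "nsze"
            val=${val# }                    # drop the space after the ':'
            [[ -n $val ]] || continue       # the "[[ -n '' ]]" lines above skip blanks (functions.sh@22)
            eval "${ref}[$reg]=\"\$val\""   # e.g. ng2n3[nsze]="0x100000" (functions.sh@23)
        done < <("$NVME_CMD" "$@")
    }

    nvme_get ng2n3 id-ns /dev/ng2n3         # the call visible at functions.sh@57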
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.581 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.582 14:02:32 nvme_scc -- 
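The functions.sh@54-58 lines that repeat through this section are the namespace walk itself: for each controller, an extglob pattern matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) under /sys/class/nvme/nvme2, runs nvme_get on each, and records the device in _ctrl_ns keyed by namespace id. A condensed sketch, assuming the nvme_get helper above (nullglob is an assumption here; the real script instead relies on the -e check to skip a non-matching pattern):

    shopt -s extglob nullglob
    declare -A _ctrl_ns

    ctrl=/sys/class/nvme/nvme2
    # "ng${ctrl##*nvme}" -> ng2 and "${ctrl##*/}n" -> nvme2n, so the pattern
    # expands to the ng2* and nvme2n* entries in the controller's sysfs dir.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue            # functions.sh@55
        ns_dev=${ns##*/}                    # e.g. nvme2n1 (functions.sh@56)
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # ${ns##*n} leaves the namespace id, e.g. 1 (functions.sh@58)
    done

Because ng2nX and nvme2nX share a namespace id, each _ctrl_ns slot is first set to the generic device and then overwritten by the block device, which matches the ng2n2 -> nvme2n2 progression in this trace.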
nvme/functions.sh@21 -- # read -r reg val 00:09:30.582 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.583 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:30.583 14:02:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.583 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:30.584 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:30.584 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 
14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.585 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:30.586 14:02:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.586 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.587 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:30.587 14:02:32 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:30.587 14:02:32 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.587 14:02:32 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:30.587 14:02:32 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.587 14:02:32 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.587 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:30.588 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:30.588 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 
14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:30.588 14:02:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:30.588 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 
14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:30.589 
14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.589 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:30.590 14:02:32 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:30.590 14:02:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
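
The walls of trace above are nvme/functions.sh doing two things: nvme_get parses `nvme id-ctrl`/`id-ns` output into one bash associative array per device (nvme2n2, nvme2n3, nvme3, ...), and get_ctrls_with_feature then walks those arrays, testing bit 8 of each controller's ONCS field — the Copy command (SCC) bit in the NVMe spec — to find controllers that can run this test. A condensed sketch of that probe, not the verbatim functions.sh source:

    # Condensed sketch of the SCC probe traced above. nvme_get has already
    # parsed `nvme id-ctrl` output into one associative array per controller:
    declare -A nvme1=([oncs]=0x15d)    # the real array holds every id-ctrl field
    declare -A ctrls=([nvme1]=nvme1)   # all four QEMU controllers in this run

    ctrl_has_scc() {
        local -n _ctrl=$1              # bash nameref into e.g. the nvme1 array
        local oncs=${_ctrl[oncs]}      # 0x15d in this run
        (( oncs & 1 << 8 ))            # ONCS bit 8 = Copy command (SCC) support
    }

    for ctrl in "${!ctrls[@]}"; do
        ctrl_has_scc "$ctrl" && echo "$ctrl"   # 0x15d has bit 8 set: all pass
    done

With ONCS = 0x15d on every controller the check always succeeds, and nvme_scc.sh takes the first hit, nvme1 at 0000:00:10.0, as the `echo nvme1` and `ctrl=nvme1` records just below show.
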
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:30.590 14:02:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:30.848 14:02:32 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:30.848 14:02:32 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:30.848 14:02:32 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:30.848 14:02:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:30.848 14:02:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:30.848 14:02:32 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:31.105 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:31.670 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:31.670 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:31.670 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:31.670 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:31.670 14:02:33 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:31.670 14:02:33 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:31.670 14:02:33 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:31.670 14:02:33 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:31.670 ************************************
00:09:31.670 START TEST nvme_simple_copy
00:09:31.670 ************************************
00:09:31.670 14:02:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:31.928 Initializing NVMe Controllers
00:09:31.928 Attaching to 0000:00:10.0
00:09:31.928 Controller supports SCC. Attached to 0000:00:10.0
00:09:31.928 Namespace ID: 1 size: 6GB
00:09:31.928 Initialization complete.
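The simple_copy binary is an SPDK C test, but the result lines that follow describe a simple round trip: write LBAs 0 through 63 with random data, issue one Simple Copy command with destination LBA 256, then read both ranges back and count matching blocks. Roughly the same experiment can be run from the shell against a kernel-owned namespace; this is only a sketch, and the nvme-cli copy flag names (--sdlba, --slbs, --blocks) are quoted from memory and should be checked against the installed nvme-cli version:

    #!/usr/bin/env bash
    # Destructive sketch: overwrites LBAs 0-63 and 256-319 of $dev.
    set -euo pipefail
    dev=/dev/nvme0n1 bs=4096
    # Write LBAs 0-63 with random data.
    dd if=/dev/urandom of="$dev" bs="$bs" count=64 oflag=direct
    # One Simple Copy descriptor: source LBAs 0-63 -> destination LBA 256
    # (--blocks takes a 0-based NLB, hence 63 for 64 blocks).
    nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63
    # Read both ranges back and verify the copy.
    dd if="$dev" of=/tmp/src.bin bs="$bs" count=64 iflag=direct
    dd if="$dev" of=/tmp/dst.bin bs="$bs" skip=256 count=64 iflag=direct
    cmp /tmp/src.bin /tmp/dst.bin && echo 'LBAs matching Written Data: 64'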
00:09:31.928
00:09:31.928 Controller QEMU NVMe Ctrl (12340 )
00:09:31.928 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:31.928 Namespace Block Size:4096
00:09:31.928 Writing LBAs 0 to 63 with Random Data
00:09:31.928 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:31.928 LBAs matching Written Data: 64
00:09:31.928
00:09:31.928 real 0m0.248s
00:09:31.928 user 0m0.094s
00:09:31.928 sys 0m0.053s
00:09:31.928 14:02:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:31.928 ************************************
00:09:31.928 END TEST nvme_simple_copy
00:09:31.928 ************************************
00:09:31.928 14:02:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:31.928 ************************************
00:09:31.928 END TEST nvme_scc
00:09:31.928 ************************************
00:09:31.928
00:09:31.928 real 0m7.639s
00:09:31.928 user 0m1.150s
00:09:31.928 sys 0m1.336s
00:09:31.928 14:02:33 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:31.928 14:02:33 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:31.928 14:02:33 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:31.928 14:02:33 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:31.928 14:02:33 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:31.928 14:02:33 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:31.928 14:02:33 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:31.928 14:02:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:31.928 14:02:33 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:31.928 14:02:33 -- common/autotest_common.sh@10 -- # set +x
00:09:31.928 ************************************
00:09:31.928 START TEST nvme_fdp
00:09:31.928 ************************************
00:09:31.928 14:02:33 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:09:31.928 * Looking for test storage...
00:09:32.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:32.185 14:02:33 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:32.185 14:02:33 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:32.185 14:02:33 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:32.185 14:02:33 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:32.185 14:02:33 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:32.186 14:02:33 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.186 14:02:33 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.186 --rc genhtml_branch_coverage=1 00:09:32.186 --rc genhtml_function_coverage=1 00:09:32.186 --rc genhtml_legend=1 00:09:32.186 --rc geninfo_all_blocks=1 00:09:32.186 --rc geninfo_unexecuted_blocks=1 00:09:32.186 00:09:32.186 ' 00:09:32.186 14:02:33 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.186 --rc genhtml_branch_coverage=1 00:09:32.186 --rc genhtml_function_coverage=1 00:09:32.186 --rc genhtml_legend=1 00:09:32.186 --rc geninfo_all_blocks=1 00:09:32.186 --rc geninfo_unexecuted_blocks=1 00:09:32.186 00:09:32.186 ' 00:09:32.186 14:02:33 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.186 --rc genhtml_branch_coverage=1 00:09:32.186 --rc genhtml_function_coverage=1 00:09:32.186 --rc genhtml_legend=1 00:09:32.186 --rc geninfo_all_blocks=1 00:09:32.186 --rc geninfo_unexecuted_blocks=1 00:09:32.186 00:09:32.186 ' 00:09:32.186 14:02:33 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.186 --rc genhtml_branch_coverage=1 00:09:32.186 --rc genhtml_function_coverage=1 00:09:32.186 --rc genhtml_legend=1 00:09:32.186 --rc geninfo_all_blocks=1 00:09:32.186 --rc geninfo_unexecuted_blocks=1 00:09:32.186 00:09:32.186 ' 00:09:32.186 14:02:33 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.186 14:02:33 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.186 14:02:33 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.186 14:02:33 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.186 14:02:33 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.186 14:02:33 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:32.186 14:02:33 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:32.186 14:02:33 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:32.186 14:02:33 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.186 14:02:33 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.443 Waiting for block devices as requested 00:09:32.700 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.700 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.700 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.700 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.961 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:37.961 14:02:39 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:37.961 14:02:39 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:37.961 14:02:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.961 14:02:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:37.961 14:02:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.961 14:02:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.961 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:37.962 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.962 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.962 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:37.963 14:02:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:37.963 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 
14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:37.964 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:37.964 14:02:39 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.964 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:37.965 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
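Everything from scan_nvme_ctrls onward is one pattern repeated per controller and per namespace: nvme_get runs nvme id-ctrl or nvme id-ns, splits each output line on ':' with IFS=: read -r reg val, and evals the pair into a global associative array (nvme0, ng0n1, and so on) that later feature checks such as ctrl_has_scc read back instead of re-querying the device. A condensed sketch of that caching loop, simplified from what the trace shows (the real functions.sh also normalizes keys and handles multi-word values and per-namespace arrays):

    #!/usr/bin/env bash
    # Cache `nvme id-ctrl` fields in an associative array, as the trace does.
    declare -A nvme0
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # strip the padding around the key
      [[ -n $reg && -n $val ]] || continue
      nvme0[$reg]=${val# }            # drop the separator's leading space
    done < <(nvme id-ctrl /dev/nvme0)
    # Later checks read the cache, e.g. the ONCS bit test seen earlier:
    (( nvme0[oncs] & 1 << 8 )) && echo 'nvme0 supports Simple Copy'

Because read takes the remainder of the line into val, multi-colon values such as the power-state strings (mp:25.00W operational ...) survive intact, which is why the trace shows them stored as single quoted values.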
00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:37.965 14:02:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.965 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
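The lbafN strings captured above describe the namespace's LBA formats; flbas selects the active one (here lbaf4, whose lbads:12 means 4096-byte blocks, since lbads is the base-2 logarithm of the LBA data size). A minimal bash sketch of turning such a string into a block size; the helper name is hypothetical, not part of nvme/functions.sh:

    #!/usr/bin/env bash
    # Sketch: extract lbads from an lbafN string such as
    # "ms:0 lbads:12 rp:0 (in use)" and convert it to a byte count.
    # lbads is log2 of the block size, so lbads:12 -> 4096 bytes.
    lbaf_block_size() {   # hypothetical helper, for illustration only
        local field lbads=
        for field in $1; do
            [[ $field == lbads:* ]] && lbads=${field#lbads:}
        done
        [[ -n $lbads ]] && echo $((1 << lbads))
    }
    lbaf_block_size 'ms:0 lbads:12 rp:0 (in use)'   # prints 4096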
00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:37.966 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 (id-ns): nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:37.967 14:02:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:37.967 14:02:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:37.967 14:02:39 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:37.967 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
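For context, the loop above enumerates /sys/class/nvme/nvme*, resolves each controller's PCI address (0000:00:11.0 for nvme0, 0000:00:10.0 for nvme1), filters it through pci_can_use, and indexes the result in the ctrls/nvmes/bdfs/ordered_ctrls arrays. A stripped-down sketch of that discovery pattern, with pci_can_use stubbed out (the suite's real check consults PCI allow/block lists from the environment):

    #!/usr/bin/env bash
    # Sketch of the sysfs discovery loop seen in the trace. The PCI
    # filter is stubbed to accept everything.
    declare -A ctrls bdfs
    pci_can_use() { true; }   # stub for illustration

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme1
        ctrls[$ctrl_dev]=$ctrl_dev
        bdfs[$ctrl_dev]=$pci
    done
    for dev in "${!ctrls[@]}"; do echo "$dev -> ${bdfs[$dev]}"; done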
00:09:37.968 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 (id-ctrl): vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:37.968 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
00:09:37.969 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1: nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:09:37.970 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1: subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
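Every register above was populated by the same mechanism: nvme_get runs nvme id-ctrl (or id-ns), reads the output line by line with IFS=:, and evals each reg/val pair into a named associative array. A self-contained approximation of that pattern; the input is inlined here, and the quoting is simplified relative to the real function:

    #!/usr/bin/env bash
    # Approximation of the nvme_get parsing pattern from the trace:
    # split each "reg : val" line on ':' and eval it into an array.
    declare -A nvme1
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}      # trim the padded key
        val=${val# }                  # drop the leading space
        eval "nvme1[$reg]=\"$val\""
    done <<'EOF'
    vid   : 0x1b36
    ssvid : 0x1af4
    sn    : 12340
    subnqn: nqn.2019-08.org.qemu:12340
    EOF
    echo "${nvme1[vid]} ${nvme1[subnqn]}"   # -> 0x1b36 nqn.2019-08.org.qemu:12340

Note that in the inlined form above the here-document lines would need to start at column 0 (or use <<-EOF with tab indentation) for the snippet to run verbatim.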
00:09:37.970 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): active_power_workload=-
00:09:37.970 14:02:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:37.970 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # namespace scan: /sys/class/nvme/nvme1/ng1n1 exists, ns_dev=ng1n1
00:09:37.970 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 (/usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1)
00:09:37.970 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:37.971 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:37.972 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1 id-ns: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:37.972 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0'
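The lbafN strings record the namespace's supported LBA formats, and flbas selects the active one; lbads is the log2 of the data block size. A small sketch of the decode, using the ng1n1 values captured above (the bit masking follows the NVMe flbas field layout):

# Decode the in-use LBA format from the values above.
flbas=0x7
fmt=$(( flbas & 0xf ))     # bits 3:0 select the LBA format index -> 7
lbads=12                   # from lbaf7: 'ms:64 lbads:12 rp:0 (in use)'
ms=64
echo "lbaf$fmt: $((1 << lbads))-byte data blocks + $ms metadata bytes"
# -> lbaf7: 4096-byte data blocks + 64 metadata bytes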
00:09:37.972 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1 LBA formats (cont.): lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:37.972 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:09:37.972 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # namespace scan: /sys/class/nvme/nvme1/nvme1n1 exists, ns_dev=nvme1n1
00:09:37.972 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 (/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
00:09:37.972 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1 id-ns: same values as ng1n1 above: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:37.973 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
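Both ng1n1 (the character I/O node) and nvme1n1 (the block node) are picked up by the single extglob pattern driving the namespace loop, `"$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*`. A standalone sketch of that glob, assuming the same sysfs layout as this run:

#!/usr/bin/env bash
# Sketch of the extglob namespace scan seen in the trace.
shopt -s extglob nullglob      # extglob enables @(a|b); nullglob skips empty matches
ctrl=/sys/class/nvme/nvme1
# ${ctrl##*nvme} -> "1", ${ctrl##*/} -> "nvme1", so the pattern matches
# both ng1* (char node) and nvme1n* (block node) under the controller dir.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "found namespace entry: ${ns##*/}"
done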
00:09:37.973 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1 LBA formats (cont.): lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:37.973 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
00:09:37.973 14:02:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:09:37.973 14:02:39 nvme_fdp -- nvme/functions.sh@48 -- # next controller: /sys/class/nvme/nvme2 exists, pci=0000:00:12.0, pci_can_use 0000:00:12.0 returns 0, ctrl_dev=nvme2
00:09:37.973 14:02:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
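Once the last namespace is parsed, the script files the controller into its lookup tables; a nameref (`local -n _ctrl_ns=nvme1_ns`) lets the generic loop write into the per-controller namespace map by name, and both node flavors resolve to the same index (`${ns##*n}` -> 1), so the block node overwrites the char node's slot. A condensed sketch of that bookkeeping, with the values from this run:

# Per-controller lookup tables built by the enumeration loop (values from this run).
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
declare -A nvme1_ns=([1]=nvme1n1)       # filled via: local -n _ctrl_ns=nvme1_ns
ctrl_dev=nvme1
ctrls[$ctrl_dev]=nvme1                  # controller -> name of its id-ctrl array
nvmes[$ctrl_dev]=nvme1_ns               # controller -> name of its namespace map
bdfs[$ctrl_dev]=0000:00:10.0            # controller -> PCI address (BDF)
ordered_ctrls[${ctrl_dev/nvme/}]=nvme1  # numeric slot preserves enumeration order

Storing array *names* (nvme1, nvme1_ns) rather than values is what makes the later nameref-based lookups work without knowing the controller count in advance.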
00:09:37.974 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:37.975 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2 id-ctrl: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:37.976 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2 id-ctrl: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
nvme/functions.sh@21 -- # read -r reg val 00:09:37.976 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.976 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:37.976 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:37.976 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.976 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.976 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
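The nvme_get calls traced above (nvme/functions.sh@17-23) work by feeding "nvme id-ctrl" output into a read loop: IFS=: splits each line into a register name and a value, the [[ -n ... ]] test at functions.sh@22 skips empty values, and the eval at functions.sh@23 assigns the rest into the nvme2 associative array. A minimal sketch of that pattern, assuming nvme-cli is installed; the device path, array name, and final echo are illustrative and not taken from functions.sh:

    #!/usr/bin/env bash
    # Parse "reg : val" lines from nvme-cli into an associative array,
    # mirroring the IFS=: / read -r reg val / eval pattern in the trace.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # drop padding around the register name
        val="${val#"${val%%[![:space:]]*}"}"     # strip leading spaces from the value
        [[ -n $val ]] && ctrl[$reg]=$val         # same non-empty guard as functions.sh@22
    done < <(nvme id-ctrl /dev/nvme2)            # illustrative device path
    echo "sqes=${ctrl[sqes]} cqes=${ctrl[cqes]} subnqn=${ctrl[subnqn]}"
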
00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.241 14:02:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 
14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.242 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.243 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.243 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.244 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:38.245 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 
14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:38.245 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:38.246 
14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
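Each per-namespace dump like this one is driven by the loop at nvme/functions.sh@54-58: an extglob pattern under the controller's sysfs directory matches both the generic character nodes (ng2nN) and the block nodes (nvme2nN), nvme_get is invoked with id-ns for each device that exists, and the device name is recorded in _ctrl_ns keyed by namespace number. A sketch of that walk, assuming extglob as in the real script; the sysfs path, the nullglob option, and the trailing echo are illustrative additions:

    #!/usr/bin/env bash
    # Enumerate a controller's namespaces the way functions.sh@54-58 does.
    shopt -s extglob nullglob                    # nullglob added here for safety
    ctrl=/sys/class/nvme/nvme2                   # illustrative controller path
    declare -A _ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                 # same existence check as @55
        ns_dev=${ns##*/}                         # e.g. ng2n1, later nvme2n1
        # here the real script runs: nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev              # key by namespace number
    done
    echo "namespaces: ${!_ctrl_ns[*]} -> ${_ctrl_ns[*]}"
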
00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:38.246 14:02:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.246 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.247 14:02:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:38.247 14:02:39 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.247 
14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.247 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:38.248 14:02:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.248 
14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.248 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
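At this point the loop has reached nvme2n1's eight LBA formats: each lbafN entry records the metadata size (ms), the log2 of the data block size (lbads), and a relative-performance hint (rp), and flbas=0x4 (parsed earlier for this namespace) marks lbaf4 as the one in use. Decoding those two fields is plain arithmetic — the values below are taken from this log, and the formulas follow the NVMe spec rather than any SPDK helper:

flbas=0x4      # low nibble indexes the active format, lbaf0..lbaf15
lbads=12       # from lbaf4: "ms:0 lbads:12 rp:0 (in use)"
echo "active format : lbaf$((flbas & 0xf))"    # -> lbaf4
echo "LBA data size : $((1 << lbads)) bytes"   # -> 4096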
00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:38.249 14:02:39 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:38.249 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:38.250 14:02:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.250 14:02:39 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.250 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:38.251 14:02:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:38.251 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:38.252 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.252 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.253 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:38.253 14:02:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.253 14:02:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:38.253 14:02:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.253 14:02:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.253 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 
14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:38.254 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.255 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:38.256 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
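Note on the trace above: everything from nvme3[vid] down to nvme3[ofcs] is produced by a single helper, nvme_get in nvme/functions.sh, which splits each "reg : val" line of `nvme id-ctrl` output on the colon and evals the pair into a named associative array. A minimal sketch of the idiom follows; the real helper needs eval because the target array name (nvme0, nvme1, ...) is computed per controller, so a fixed array name here is a simplification, and the device path is illustrative.

# Sketch of the nvme_get parsing idiom seen in the trace: split each
# "reg : val" line on the first colon and store it in an associative array.
declare -A ctrl=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # keep only "key : value" lines
    reg=${reg//[[:space:]]/}         # "vid       " -> "vid"
    ctrl[$reg]=${val# }              # drop the space after the colon
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
echo "vid=${ctrl[vid]} ctratt=${ctrl[ctratt]}"

Because only the last read variable keeps the remainder of the line, multi-colon values such as "lbaf0 : ms:0 lbads:9 rp:0" land intact in val, which is exactly what the lbaf entries in the trace show.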
00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:38.257 14:02:39 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:38.257 14:02:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:38.258 14:02:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:38.258 14:02:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:38.258 14:02:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:38.258 14:02:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:38.258 14:02:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.258 14:02:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.258 14:02:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.258 14:02:40 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:38.258 14:02:40 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:38.258 14:02:40 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:38.258 14:02:40 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:38.258 14:02:40 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:38.258 14:02:40 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:38.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.082 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.082 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.082 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.339 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.339 14:02:40 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.339 14:02:40 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.339 14:02:40 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.339 14:02:40 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.339 ************************************ 00:09:39.339 START TEST nvme_flexible_data_placement 00:09:39.339 ************************************ 00:09:39.339 14:02:40 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.597 Initializing NVMe Controllers 00:09:39.597 Attaching to 0000:00:13.0 00:09:39.597 Controller supports FDP Attached to 0000:00:13.0 00:09:39.597 Namespace ID: 1 Endurance Group ID: 1 00:09:39.597 Initialization complete. 
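The controller walk traced just above (ctrl_has_fdp over nvme0..nvme3) reduces to one capability test: Identify Controller CTRATT bit 19 advertises Flexible Data Placement, which is why only nvme3 (ctratt=0x88010) is echoed while the 0x8000 controllers are skipped. A standalone sketch of the same check, assuming nvme-cli's JSON output and jq are available; the device path is illustrative.

# Sketch of the ctrl_has_fdp test: FDP support is CTRATT bit 19 in
# Identify Controller. 0x88010 & (1 << 19) is nonzero; 0x8000 is not.
dev=${1:-/dev/nvme3}
ctratt=$(nvme id-ctrl "$dev" -o json | jq -r '.ctratt')
if (( ctratt & 1 << 19 )); then
    printf '%s supports FDP (ctratt=0x%x)\n' "$dev" "$ctratt"
else
    printf '%s does not support FDP\n' "$dev"
fi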
00:09:39.597 00:09:39.597 ================================== 00:09:39.597 == FDP tests for Namespace: #01 == 00:09:39.597 ================================== 00:09:39.597 00:09:39.597 Get Feature: FDP: 00:09:39.597 ================= 00:09:39.597 Enabled: Yes 00:09:39.597 FDP configuration Index: 0 00:09:39.597 00:09:39.597 FDP configurations log page 00:09:39.597 =========================== 00:09:39.597 Number of FDP configurations: 1 00:09:39.597 Version: 0 00:09:39.597 Size: 112 00:09:39.597 FDP Configuration Descriptor: 0 00:09:39.597 Descriptor Size: 96 00:09:39.597 Reclaim Group Identifier format: 2 00:09:39.597 FDP Volatile Write Cache: Not Present 00:09:39.597 FDP Configuration: Valid 00:09:39.597 Vendor Specific Size: 0 00:09:39.597 Number of Reclaim Groups: 2 00:09:39.597 Number of Reclaim Unit Handles: 8 00:09:39.597 Max Placement Identifiers: 128 00:09:39.597 Number of Namespaces Supported: 256 00:09:39.597 Reclaim Unit Nominal Size: 6000000 bytes 00:09:39.597 Estimated Reclaim Unit Time Limit: Not Reported 00:09:39.597 RUH Desc #000: RUH Type: Initially Isolated 00:09:39.597 RUH Desc #001: RUH Type: Initially Isolated 00:09:39.597 RUH Desc #002: RUH Type: Initially Isolated 00:09:39.597 RUH Desc #003: RUH Type: Initially Isolated 00:09:39.598 RUH Desc #004: RUH Type: Initially Isolated 00:09:39.598 RUH Desc #005: RUH Type: Initially Isolated 00:09:39.598 RUH Desc #006: RUH Type: Initially Isolated 00:09:39.598 RUH Desc #007: RUH Type: Initially Isolated 00:09:39.598 00:09:39.598 FDP reclaim unit handle usage log page 00:09:39.598 ====================================== 00:09:39.598 Number of Reclaim Unit Handles: 8 00:09:39.598 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:39.598 RUH Usage Desc #001: RUH Attributes: Unused 00:09:39.598 RUH Usage Desc #002: RUH Attributes: Unused 00:09:39.598 RUH Usage Desc #003: RUH Attributes: Unused 00:09:39.598 RUH Usage Desc #004: RUH Attributes: Unused 00:09:39.598 RUH Usage Desc #005: RUH Attributes: Unused 00:09:39.598 RUH Usage Desc #006: RUH Attributes: Unused 00:09:39.598 RUH Usage Desc #007: RUH Attributes: Unused 00:09:39.598 00:09:39.598 FDP statistics log page 00:09:39.598 ======================= 00:09:39.598 Host bytes with metadata written: 1020203008 00:09:39.598 Media bytes with metadata written: 1020448768 00:09:39.598 Media bytes erased: 0 00:09:39.598 00:09:39.598 FDP Reclaim unit handle status 00:09:39.598 ============================== 00:09:39.598 Number of RUHS descriptors: 2 00:09:39.598 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000530f 00:09:39.598 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:39.598 00:09:39.598 FDP write on placement id: 0 success 00:09:39.598 00:09:39.598 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:39.598 00:09:39.598 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:39.598 00:09:39.598 Get Feature: FDP Events for Placement handle: #0 00:09:39.598 ======================== 00:09:39.598 Number of FDP Events: 6 00:09:39.598 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:39.598 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:39.598 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:39.598 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:39.598 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:39.598 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:39.598 00:09:39.598 FDP events log
page 00:09:39.598 =================== 00:09:39.598 Number of FDP events: 1 00:09:39.598 FDP Event #0: 00:09:39.598 Event Type: RU Not Written to Capacity 00:09:39.598 Placement Identifier: Valid 00:09:39.598 NSID: Valid 00:09:39.598 Location: Valid 00:09:39.598 Placement Identifier: 0 00:09:39.598 Event Timestamp: 5 00:09:39.598 Namespace Identifier: 1 00:09:39.598 Reclaim Group Identifier: 0 00:09:39.598 Reclaim Unit Handle Identifier: 0 00:09:39.598 00:09:39.598 FDP test passed 00:09:39.598 00:09:39.598 real 0m0.236s 00:09:39.598 user 0m0.071s 00:09:39.598 sys 0m0.064s 00:09:39.598 14:02:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.598 14:02:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:39.598 ************************************ 00:09:39.598 END TEST nvme_flexible_data_placement 00:09:39.598 ************************************ 00:09:39.598 00:09:39.598 real 0m7.603s 00:09:39.598 user 0m1.134s 00:09:39.598 sys 0m1.345s 00:09:39.598 14:02:41 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.598 14:02:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.598 ************************************ 00:09:39.598 END TEST nvme_fdp 00:09:39.598 ************************************ 00:09:39.598 14:02:41 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:39.598 14:02:41 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.598 14:02:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.598 14:02:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.598 14:02:41 -- common/autotest_common.sh@10 -- # set +x 00:09:39.598 ************************************ 00:09:39.598 START TEST nvme_rpc 00:09:39.598 ************************************ 00:09:39.598 14:02:41 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.598 * Looking for test storage... 
00:09:39.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.598 14:02:41 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.598 14:02:41 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.598 14:02:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.856 14:02:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.856 14:02:41 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.856 14:02:41 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.856 14:02:41 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.856 14:02:41 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.856 14:02:41 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.857 14:02:41 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.857 --rc genhtml_branch_coverage=1 00:09:39.857 --rc genhtml_function_coverage=1 00:09:39.857 --rc genhtml_legend=1 00:09:39.857 --rc geninfo_all_blocks=1 00:09:39.857 --rc geninfo_unexecuted_blocks=1 00:09:39.857 00:09:39.857 ' 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.857 --rc genhtml_branch_coverage=1 00:09:39.857 --rc genhtml_function_coverage=1 00:09:39.857 --rc genhtml_legend=1 00:09:39.857 --rc geninfo_all_blocks=1 00:09:39.857 --rc geninfo_unexecuted_blocks=1 00:09:39.857 00:09:39.857 ' 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:39.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.857 --rc genhtml_branch_coverage=1 00:09:39.857 --rc genhtml_function_coverage=1 00:09:39.857 --rc genhtml_legend=1 00:09:39.857 --rc geninfo_all_blocks=1 00:09:39.857 --rc geninfo_unexecuted_blocks=1 00:09:39.857 00:09:39.857 ' 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.857 --rc genhtml_branch_coverage=1 00:09:39.857 --rc genhtml_function_coverage=1 00:09:39.857 --rc genhtml_legend=1 00:09:39.857 --rc geninfo_all_blocks=1 00:09:39.857 --rc geninfo_unexecuted_blocks=1 00:09:39.857 00:09:39.857 ' 00:09:39.857 14:02:41 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.857 14:02:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:39.857 14:02:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:39.857 14:02:41 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65790 00:09:39.857 14:02:41 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:39.857 14:02:41 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:39.857 14:02:41 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65790 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65790 ']' 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.857 14:02:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.857 [2024-12-09 14:02:41.564999] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
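The get_first_nvme_bdf trace above shows the enumeration idiom used by nvme_rpc.sh: gen_nvme.sh emits a JSON config, jq pulls every traddr, and the first PCI address (0000:00:10.0 here) becomes $bdf. The same idiom on its own, assuming a config of the shape seen in the trace saved to a file; the filename is illustrative.

# Sketch of get_first_nvme_bdf: collect NVMe PCI addresses from a
# gen_nvme.sh-style JSON config and keep the first one.
bdfs=($(jq -r '.config[].params.traddr' nvme_config.json))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
printf 'found: %s\n' "${bdfs[@]}"   # e.g. 0000:00:10.0 ... 0000:00:13.0
bdf=${bdfs[0]}
echo "first bdf: $bdf"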
00:09:39.857 [2024-12-09 14:02:41.565113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65790 ] 00:09:40.115 [2024-12-09 14:02:41.735787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.115 [2024-12-09 14:02:41.833184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.115 [2024-12-09 14:02:41.833280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.681 14:02:42 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.681 14:02:42 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:40.681 14:02:42 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:40.939 Nvme0n1 00:09:40.939 14:02:42 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:40.939 14:02:42 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:41.197 request: 00:09:41.197 { 00:09:41.197 "bdev_name": "Nvme0n1", 00:09:41.197 "filename": "non_existing_file", 00:09:41.197 "method": "bdev_nvme_apply_firmware", 00:09:41.197 "req_id": 1 00:09:41.197 } 00:09:41.197 Got JSON-RPC error response 00:09:41.197 response: 00:09:41.197 { 00:09:41.197 "code": -32603, 00:09:41.197 "message": "open file failed." 00:09:41.197 } 00:09:41.197 14:02:42 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:41.197 14:02:42 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:41.197 14:02:42 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:41.455 14:02:43 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:41.455 14:02:43 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65790 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65790 ']' 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65790 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65790 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.455 killing process with pid 65790 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65790' 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65790 00:09:41.455 14:02:43 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65790 00:09:42.830 00:09:42.830 real 0m3.201s 00:09:42.830 user 0m6.071s 00:09:42.830 sys 0m0.452s 00:09:42.830 14:02:44 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.830 14:02:44 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.830 ************************************ 00:09:42.830 END TEST nvme_rpc 00:09:42.830 ************************************ 00:09:42.830 14:02:44 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:42.830 14:02:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:42.830 14:02:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.830 14:02:44 -- common/autotest_common.sh@10 -- # set +x 00:09:42.830 ************************************ 00:09:42.830 START TEST nvme_rpc_timeouts 00:09:42.830 ************************************ 00:09:42.830 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:42.830 * Looking for test storage... 00:09:42.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.830 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.830 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.830 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.088 14:02:44 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.088 --rc genhtml_branch_coverage=1 00:09:43.088 --rc genhtml_function_coverage=1 00:09:43.088 --rc genhtml_legend=1 00:09:43.088 --rc geninfo_all_blocks=1 00:09:43.088 --rc geninfo_unexecuted_blocks=1 00:09:43.088 00:09:43.088 ' 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.088 --rc genhtml_branch_coverage=1 00:09:43.088 --rc genhtml_function_coverage=1 00:09:43.088 --rc genhtml_legend=1 00:09:43.088 --rc geninfo_all_blocks=1 00:09:43.088 --rc geninfo_unexecuted_blocks=1 00:09:43.088 00:09:43.088 ' 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.088 --rc genhtml_branch_coverage=1 00:09:43.088 --rc genhtml_function_coverage=1 00:09:43.088 --rc genhtml_legend=1 00:09:43.088 --rc geninfo_all_blocks=1 00:09:43.088 --rc geninfo_unexecuted_blocks=1 00:09:43.088 00:09:43.088 ' 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.088 --rc genhtml_branch_coverage=1 00:09:43.088 --rc genhtml_function_coverage=1 00:09:43.088 --rc genhtml_legend=1 00:09:43.088 --rc geninfo_all_blocks=1 00:09:43.088 --rc geninfo_unexecuted_blocks=1 00:09:43.088 00:09:43.088 ' 00:09:43.088 14:02:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.088 14:02:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65855 00:09:43.088 14:02:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65855 00:09:43.088 14:02:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65887 00:09:43.088 14:02:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:43.088 14:02:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:43.088 14:02:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65887 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65887 ']' 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.088 14:02:44 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:43.088 [2024-12-09 14:02:44.754114] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:09:43.088 [2024-12-09 14:02:44.754239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65887 ] 00:09:43.345 [2024-12-09 14:02:44.909768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.345 [2024-12-09 14:02:45.007963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.345 [2024-12-09 14:02:45.008051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.910 14:02:45 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.910 14:02:45 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:43.910 14:02:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:43.910 Checking default timeout settings: 00:09:43.910 14:02:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.168 Making settings changes with rpc: 00:09:44.168 14:02:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:44.168 14:02:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:44.426 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:44.426 Check default vs. 
modified settings: 00:09:44.426 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65855 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65855 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.685 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.942 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:44.942 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:44.942 Setting action_on_timeout is changed as expected. 00:09:44.942 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:44.942 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65855 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65855 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:44.943 Setting timeout_us is changed as expected. 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65855 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65855 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:44.943 Setting timeout_admin_us is changed as expected. 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65855 /tmp/settings_modified_65855 00:09:44.943 14:02:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65887 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65887 ']' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65887 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65887 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.943 killing process with pid 65887 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65887' 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65887 00:09:44.943 14:02:46 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65887 00:09:46.314 RPC TIMEOUT SETTING TEST PASSED. 00:09:46.315 14:02:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
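The nvme_rpc_timeouts pass above reduces to a save/modify/save/diff pattern: snapshot the target's configuration with save_config, push new limits with bdev_nvme_set_options, snapshot again, and compare the three affected fields. A condensed sketch of that pattern as traced here (a live spdk_tgt listening on /var/tmp/spdk.sock is assumed; the PID-keyed temp-file names are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    defaults=/tmp/settings_default_$$        # the test keys these by the spdk_tgt pid
    modified=/tmp/settings_modified_$$
    "$rpc" save_config > "$defaults"         # snapshot the default timeout settings
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > "$modified"         # snapshot again after the change
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$defaults" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done

The grep/awk/sed chain is the same normalization the harness traces above: pick the JSON line for the field, take its value column, and strip everything but alphanumerics so none, abort, 0 and 12000000 compare cleanly.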
00:09:46.315 00:09:46.315 real 0m3.380s 00:09:46.315 user 0m6.613s 00:09:46.315 sys 0m0.450s 00:09:46.315 14:02:47 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.315 14:02:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:46.315 ************************************ 00:09:46.315 END TEST nvme_rpc_timeouts 00:09:46.315 ************************************ 00:09:46.315 14:02:47 -- spdk/autotest.sh@239 -- # uname -s 00:09:46.315 14:02:47 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:46.315 14:02:47 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.315 14:02:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.315 14:02:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.315 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.315 ************************************ 00:09:46.315 START TEST sw_hotplug 00:09:46.315 ************************************ 00:09:46.315 14:02:47 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.315 * Looking for test storage... 00:09:46.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.315 14:02:48 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.315 --rc genhtml_branch_coverage=1 00:09:46.315 --rc genhtml_function_coverage=1 00:09:46.315 --rc genhtml_legend=1 00:09:46.315 --rc geninfo_all_blocks=1 00:09:46.315 --rc geninfo_unexecuted_blocks=1 00:09:46.315 00:09:46.315 ' 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.315 --rc genhtml_branch_coverage=1 00:09:46.315 --rc genhtml_function_coverage=1 00:09:46.315 --rc genhtml_legend=1 00:09:46.315 --rc geninfo_all_blocks=1 00:09:46.315 --rc geninfo_unexecuted_blocks=1 00:09:46.315 00:09:46.315 ' 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.315 --rc genhtml_branch_coverage=1 00:09:46.315 --rc genhtml_function_coverage=1 00:09:46.315 --rc genhtml_legend=1 00:09:46.315 --rc geninfo_all_blocks=1 00:09:46.315 --rc geninfo_unexecuted_blocks=1 00:09:46.315 00:09:46.315 ' 00:09:46.315 14:02:48 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.315 --rc genhtml_branch_coverage=1 00:09:46.315 --rc genhtml_function_coverage=1 00:09:46.315 --rc genhtml_legend=1 00:09:46.315 --rc geninfo_all_blocks=1 00:09:46.315 --rc geninfo_unexecuted_blocks=1 00:09:46.315 00:09:46.315 ' 00:09:46.315 14:02:48 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:46.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.831 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.831 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.831 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.831 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.831 14:02:48 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:46.831 14:02:48 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:46.831 14:02:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
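The xtrace that follows expands nvme_in_userspace, which discovers NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express). Stripped of the per-BDF loop, the discovery pipeline traced below comes down to this sketch (lspci from pciutils assumed; pipeline order follows the scripts/common.sh line numbers in the trace):

    # Print the BDF of every NVMe controller (class/subclass/progif = 01/08/02)
    lspci -mm -n -D |               # machine-readable, numeric IDs, full PCI domains
        grep -i -- -p02 |           # keep functions whose programming interface is 02
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |  # class 01, subclass 08
        tr -d '"'                   # lspci quotes its fields; strip the quotes

Each BDF that survives is then vetted with pci_can_use against PCI_ALLOWED and, on Linux, checked for a bound driver under /sys/bus/pci/drivers/nvme before it lands in the nvmes array.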
00:09:46.831 14:02:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.831 14:02:48 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.831 14:02:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.832 14:02:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.832 14:02:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:46.832 14:02:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.832 14:02:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.832 14:02:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.832 14:02:48 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:46.832 14:02:48 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.832 14:02:48 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:46.832 14:02:48 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:46.832 14:02:48 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:47.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.348 Waiting for block devices as requested 00:09:47.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.348 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.348 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.606 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.869 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:52.869 14:02:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:52.869 14:02:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.869 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:52.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.869 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:53.127 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:53.385 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.385 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.385 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:53.385 14:02:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66738 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:53.642 14:02:55 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:53.642 14:02:55 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:53.642 14:02:55 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:53.642 14:02:55 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:53.642 14:02:55 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:53.642 14:02:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:53.642 Initializing NVMe Controllers 00:09:53.642 Attaching to 0000:00:10.0 00:09:53.642 Attaching to 0000:00:11.0 00:09:53.642 Attached to 0000:00:10.0 00:09:53.642 Attached to 0000:00:11.0 00:09:53.642 Initialization complete. Starting I/O... 
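Also visible in the trace just above: debug_remove_attach_helper arms a plain bash timer around the whole hot-plug loop by setting TIMEFORMAT=%2R, so the time keyword prints only elapsed real seconds to two decimals; that single number is what gets captured and reported as helper_time once the three events finish (42.85 further down). A minimal standalone illustration of the same trick, not the harness's exact exec/fd plumbing:

    TIMEFORMAT=%2R                      # 'time' now prints just real seconds, e.g. 1.00
    helper_time=$( { time sleep 1 >/dev/null 2>&1; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2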
00:09:53.642 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:53.642 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:53.642 00:09:55.019 QEMU NVMe Ctrl (12340 ): 2557 I/Os completed (+2557) 00:09:55.019 QEMU NVMe Ctrl (12341 ): 2538 I/Os completed (+2538) 00:09:55.019 00:09:55.952 QEMU NVMe Ctrl (12340 ): 5727 I/Os completed (+3170) 00:09:55.952 QEMU NVMe Ctrl (12341 ): 5642 I/Os completed (+3104) 00:09:55.953 00:09:56.886 QEMU NVMe Ctrl (12340 ): 9093 I/Os completed (+3366) 00:09:56.886 QEMU NVMe Ctrl (12341 ): 9173 I/Os completed (+3531) 00:09:56.886 00:09:57.817 QEMU NVMe Ctrl (12340 ): 12059 I/Os completed (+2966) 00:09:57.817 QEMU NVMe Ctrl (12341 ): 12292 I/Os completed (+3119) 00:09:57.817 00:09:58.750 QEMU NVMe Ctrl (12340 ): 15742 I/Os completed (+3683) 00:09:58.750 QEMU NVMe Ctrl (12341 ): 16158 I/Os completed (+3866) 00:09:58.750 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.682 [2024-12-09 14:03:01.213148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:59.682 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.682 [2024-12-09 14:03:01.216824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.217128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.217329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.217571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.682 [2024-12-09 14:03:01.222663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.222791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.222825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.222929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.682 [2024-12-09 14:03:01.239092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:59.682 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.682 [2024-12-09 14:03:01.240263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.240373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.240447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.240483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.682 [2024-12-09 14:03:01.242263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.242359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.242430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 [2024-12-09 14:03:01.242459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.682 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:59.682 EAL: Scan for (pci) bus failed. 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.682 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.682 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:59.682 Attaching to 0000:00:10.0 00:09:59.682 Attached to 0000:00:10.0 00:09:59.939 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:59.939 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.939 14:03:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:59.939 Attaching to 0000:00:11.0 00:09:59.939 Attached to 0000:00:11.0 00:10:00.871 QEMU NVMe Ctrl (12340 ): 3289 I/Os completed (+3289) 00:10:00.871 QEMU NVMe Ctrl (12341 ): 3242 I/Os completed (+3242) 00:10:00.871 00:10:01.805 QEMU NVMe Ctrl (12340 ): 6720 I/Os completed (+3431) 00:10:01.805 QEMU NVMe Ctrl (12341 ): 6829 I/Os completed (+3587) 00:10:01.805 00:10:02.739 QEMU NVMe Ctrl (12340 ): 10328 I/Os completed (+3608) 00:10:02.739 QEMU NVMe Ctrl (12341 ): 10552 I/Os completed (+3723) 00:10:02.739 00:10:03.672 QEMU NVMe Ctrl (12340 ): 13730 I/Os completed (+3402) 00:10:03.672 QEMU NVMe Ctrl (12341 ): 13871 I/Os completed (+3319) 00:10:03.672 00:10:05.045 QEMU NVMe Ctrl (12340 ): 16878 I/Os completed (+3148) 00:10:05.045 QEMU NVMe Ctrl (12341 ): 16996 I/Os completed (+3125) 00:10:05.045 00:10:05.610 QEMU NVMe Ctrl (12340 ): 20476 I/Os completed (+3598) 00:10:05.610 QEMU NVMe Ctrl (12341 ): 20357 I/Os completed (+3361) 00:10:05.610 00:10:06.983 QEMU NVMe Ctrl (12340 ): 23917 I/Os completed (+3441) 00:10:06.983 QEMU NVMe Ctrl (12341 ): 23826 I/Os completed (+3469) 
00:10:06.983 00:10:07.917 QEMU NVMe Ctrl (12340 ): 27154 I/Os completed (+3237) 00:10:07.917 QEMU NVMe Ctrl (12341 ): 27087 I/Os completed (+3261) 00:10:07.917 00:10:08.852 QEMU NVMe Ctrl (12340 ): 30370 I/Os completed (+3216) 00:10:08.852 QEMU NVMe Ctrl (12341 ): 30311 I/Os completed (+3224) 00:10:08.852 00:10:09.789 QEMU NVMe Ctrl (12340 ): 33801 I/Os completed (+3431) 00:10:09.789 QEMU NVMe Ctrl (12341 ): 33730 I/Os completed (+3419) 00:10:09.789 00:10:10.727 QEMU NVMe Ctrl (12340 ): 36994 I/Os completed (+3193) 00:10:10.727 QEMU NVMe Ctrl (12341 ): 36907 I/Os completed (+3177) 00:10:10.727 00:10:11.664 QEMU NVMe Ctrl (12340 ): 40211 I/Os completed (+3217) 00:10:11.664 QEMU NVMe Ctrl (12341 ): 40066 I/Os completed (+3159) 00:10:11.664 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.926 [2024-12-09 14:03:13.494860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:11.926 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:11.926 [2024-12-09 14:03:13.496031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.496084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.496101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.496119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:11.926 [2024-12-09 14:03:13.498078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.498125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.498140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.498153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.926 [2024-12-09 14:03:13.515307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:11.926 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:11.926 [2024-12-09 14:03:13.516532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.516655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.516731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.516750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:11.926 [2024-12-09 14:03:13.518426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.518465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.518482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 [2024-12-09 14:03:13.518503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.926 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:11.926 EAL: Scan for (pci) bus failed. 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.926 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:11.926 Attaching to 0000:00:10.0 00:10:12.185 Attached to 0000:00:10.0 00:10:12.185 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.185 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.185 14:03:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:12.185 Attaching to 0000:00:11.0 00:10:12.185 Attached to 0000:00:11.0 00:10:12.750 QEMU NVMe Ctrl (12340 ): 2058 I/Os completed (+2058) 00:10:12.750 QEMU NVMe Ctrl (12341 ): 1809 I/Os completed (+1809) 00:10:12.750 00:10:13.687 QEMU NVMe Ctrl (12340 ): 5089 I/Os completed (+3031) 00:10:13.687 QEMU NVMe Ctrl (12341 ): 4776 I/Os completed (+2967) 00:10:13.687 00:10:14.629 QEMU NVMe Ctrl (12340 ): 8113 I/Os completed (+3024) 00:10:14.629 QEMU NVMe Ctrl (12341 ): 7834 I/Os completed (+3058) 00:10:14.629 00:10:16.015 QEMU NVMe Ctrl (12340 ): 11293 I/Os completed (+3180) 00:10:16.015 QEMU NVMe Ctrl (12341 ): 11017 I/Os completed (+3183) 00:10:16.015 00:10:16.958 QEMU NVMe Ctrl (12340 ): 14435 I/Os completed (+3142) 00:10:16.958 QEMU NVMe Ctrl (12341 ): 14153 I/Os completed (+3136) 00:10:16.958 00:10:17.899 QEMU NVMe Ctrl (12340 ): 17570 I/Os completed (+3135) 00:10:17.899 QEMU NVMe Ctrl (12341 ): 17312 I/Os completed (+3159) 00:10:17.899 00:10:18.839 QEMU NVMe Ctrl (12340 ): 20710 I/Os completed (+3140) 00:10:18.839 QEMU NVMe Ctrl (12341 ): 20451 I/Os completed (+3139) 00:10:18.839 
00:10:19.770 QEMU NVMe Ctrl (12340 ): 23859 I/Os completed (+3149) 00:10:19.770 QEMU NVMe Ctrl (12341 ): 23717 I/Os completed (+3266) 00:10:19.770 00:10:20.703 QEMU NVMe Ctrl (12340 ): 27047 I/Os completed (+3188) 00:10:20.703 QEMU NVMe Ctrl (12341 ): 26864 I/Os completed (+3147) 00:10:20.703 00:10:21.636 QEMU NVMe Ctrl (12340 ): 30381 I/Os completed (+3334) 00:10:21.636 QEMU NVMe Ctrl (12341 ): 30101 I/Os completed (+3237) 00:10:21.636 00:10:22.711 QEMU NVMe Ctrl (12340 ): 33613 I/Os completed (+3232) 00:10:22.711 QEMU NVMe Ctrl (12341 ): 33347 I/Os completed (+3246) 00:10:22.711 00:10:23.646 QEMU NVMe Ctrl (12340 ): 36756 I/Os completed (+3143) 00:10:23.646 QEMU NVMe Ctrl (12341 ): 36398 I/Os completed (+3051) 00:10:23.646 00:10:24.211 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:24.211 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.212 [2024-12-09 14:03:25.795792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:24.212 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:24.212 [2024-12-09 14:03:25.798224] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.798277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.798295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.798311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.212 [2024-12-09 14:03:25.800154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.800193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.800207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.800221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.212 [2024-12-09 14:03:25.820752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:24.212 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:24.212 [2024-12-09 14:03:25.821824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.821866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.821885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.821900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.212 [2024-12-09 14:03:25.823669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.823707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.823724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 [2024-12-09 14:03:25.823737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.212 14:03:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.212 Attaching to 0000:00:10.0 00:10:24.212 Attached to 0000:00:10.0 00:10:24.469 14:03:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.469 14:03:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.469 14:03:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.469 Attaching to 0000:00:11.0 00:10:24.469 Attached to 0000:00:11.0 00:10:24.469 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.469 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.469 [2024-12-09 14:03:26.065146] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:36.661 14:03:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.661 14:03:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.661 14:03:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.85 00:10:36.661 14:03:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.85 00:10:36.661 14:03:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:36.661 14:03:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.85 00:10:36.661 14:03:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.85 2 00:10:36.661 remove_attach_helper took 42.85s to complete (handling 2 nvme drive(s)) 14:03:38 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66738 00:10:43.295 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66738) - No such process 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66738 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67287 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67287 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67287 ']' 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.295 [2024-12-09 14:03:44.146314] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:10:43.295 [2024-12-09 14:03:44.146594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67287 ] 00:10:43.295 [2024-12-09 14:03:44.303859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.295 [2024-12-09 14:03:44.401167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:43.295 14:03:44 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:43.295 14:03:44 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:43.295 14:03:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:49.890 14:03:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.890 14:03:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.890 14:03:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.890 14:03:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:49.890 [2024-12-09 14:03:51.087192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:49.890 [2024-12-09 14:03:51.088592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.890 [2024-12-09 14:03:51.088737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.890 [2024-12-09 14:03:51.088755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.890 [2024-12-09 14:03:51.088774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.890 [2024-12-09 14:03:51.088782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.890 [2024-12-09 14:03:51.088791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.890 [2024-12-09 14:03:51.088799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.890 [2024-12-09 14:03:51.088807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.890 [2024-12-09 14:03:51.088814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.890 [2024-12-09 14:03:51.088827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.890 [2024-12-09 14:03:51.088833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.890 [2024-12-09 14:03:51.088841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.890 14:03:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.890 14:03:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.890 [2024-12-09 14:03:51.587175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:49.890 [2024-12-09 14:03:51.588566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.890 [2024-12-09 14:03:51.588593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.890 [2024-12-09 14:03:51.588604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.890 [2024-12-09 14:03:51.588619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.890 [2024-12-09 14:03:51.588628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.890 [2024-12-09 14:03:51.588635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.890 [2024-12-09 14:03:51.588643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.890 [2024-12-09 14:03:51.588651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.890 [2024-12-09 14:03:51.588658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.890 [2024-12-09 14:03:51.588666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.890 [2024-12-09 14:03:51.588674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.890 [2024-12-09 14:03:51.588680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.890 14:03:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:49.890 14:03:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # 
rpc_cmd bdev_get_bdevs 00:10:50.457 14:03:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.457 14:03:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.457 14:03:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.457 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:50.715 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:50.715 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.715 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.715 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.715 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:50.715 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:50.715 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.715 14:03:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.922 14:04:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.922 14:04:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.922 14:04:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.922 [2024-12-09 14:04:04.487390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
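For readers reconstructing the flow from the xtrace above: the recurring "@12/@13" and "@50/@51" entries are a polling helper that waits for the surprise-removed controllers to drop out of the target's bdev list. Below is a sketch in bash; the function body mirrors the "@12-@13" trace, while the loop shape is inferred from the "@50/@51" ordering, so treat it as a sketch rather than the script's exact text:

    # List the PCI addresses of every NVMe-backed bdev the target still sees
    # (mirrors the "@12 rpc_cmd bdev_get_bdevs | @12 jq -r ... | @13 sort -u"
    # trace entries above).
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until the removed controllers are gone from bdev_get_bdevs
    # (inferred from the "@50 bdfs=... / @50 sleep 0.5 / @51 printf" entries).
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done

The "(( 2 > 0 ))" / "(( 1 > 0 ))" / "(( 0 > 0 ))" entries in the trace are this loop condition counting the two controllers down to zero as each removal completes.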
00:11:02.922 [2024-12-09 14:04:04.488939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.922 [2024-12-09 14:04:04.489043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.922 [2024-12-09 14:04:04.489109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.922 [2024-12-09 14:04:04.489229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.922 [2024-12-09 14:04:04.489336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.922 [2024-12-09 14:04:04.489369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:02.922 [2024-12-09 14:04:04.489481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.922 [2024-12-09 14:04:04.489585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.922 [2024-12-09 14:04:04.489642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.922 [2024-12-09 14:04:04.489671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.922 [2024-12-09 14:04:04.489718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.922 [2024-12-09 14:04:04.489758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.922 14:04:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.922 14:04:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.922 14:04:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:02.922 14:04:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.488 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:03.488 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.488 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.488 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.488 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.488 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.488 14:04:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.488 14:04:05 sw_hotplug -- common/autotest_common.sh@10 -- # set 
+x 00:11:03.488 14:04:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.488 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:03.488 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.488 [2024-12-09 14:04:05.187392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:03.488 [2024-12-09 14:04:05.188780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.488 [2024-12-09 14:04:05.188910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.488 [2024-12-09 14:04:05.188929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.488 [2024-12-09 14:04:05.188946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.488 [2024-12-09 14:04:05.188955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.488 [2024-12-09 14:04:05.188962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.488 [2024-12-09 14:04:05.188971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.488 [2024-12-09 14:04:05.188978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.488 [2024-12-09 14:04:05.188986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.488 [2024-12-09 14:04:05.188994] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.488 [2024-12-09 14:04:05.189001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.488 [2024-12-09 14:04:05.189008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.053 14:04:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.053 14:04:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.053 14:04:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.053 14:04:05 sw_hotplug -- 
nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.053 14:04:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.256 14:04:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.256 14:04:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.256 14:04:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.256 [2024-12-09 14:04:17.887598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.256 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:16.256 EAL: Scan for (pci) bus failed. 
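The "@39/@40 -- # echo 1" pairs above are the surprise-removal trigger for each controller, and the EAL "cannot open sysfs value .../vendor" / "Scan for (pci) bus failed" messages are DPDK's bus re-scan racing with a device that has just vanished from sysfs, so they are expected noise in this test. A sketch of what the "@39-@40" trace plausibly executes; the sysfs path below is an assumption, since the log records only the value being echoed:

    # Surprise-remove each NVMe controller from the PCI bus. The SPDK target
    # then fails the controller and aborts its in-flight admin commands,
    # which is exactly the "in failed state" / "aborting outstanding command"
    # bursts seen above.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed sysfs path
    done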
00:11:16.256 [2024-12-09 14:04:17.889405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.256 [2024-12-09 14:04:17.889431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.256 [2024-12-09 14:04:17.889441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.256 [2024-12-09 14:04:17.889458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.256 [2024-12-09 14:04:17.889465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.256 [2024-12-09 14:04:17.889475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.256 [2024-12-09 14:04:17.889482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.256 [2024-12-09 14:04:17.889490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.256 [2024-12-09 14:04:17.889496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.256 [2024-12-09 14:04:17.889505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.256 [2024-12-09 14:04:17.889511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.256 [2024-12-09 14:04:17.889519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.256 14:04:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.256 14:04:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.256 14:04:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:16.256 14:04:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:16.514 [2024-12-09 14:04:18.287610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:16.514 [2024-12-09 14:04:18.288945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.514 [2024-12-09 14:04:18.288975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.514 [2024-12-09 14:04:18.288985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.514 [2024-12-09 14:04:18.288998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.514 [2024-12-09 14:04:18.289007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.514 [2024-12-09 14:04:18.289014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.514 [2024-12-09 14:04:18.289025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.514 [2024-12-09 14:04:18.289031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.514 [2024-12-09 14:04:18.289040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.514 [2024-12-09 14:04:18.289048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.514 [2024-12-09 14:04:18.289055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.514 [2024-12-09 14:04:18.289062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.772 14:04:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.772 14:04:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.772 14:04:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.772 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.030 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.030 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.030 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.030 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.030 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:17.030 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.030 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.030 14:04:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.74 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.74 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.74 00:11:29.233 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.74 2 00:11:29.233 remove_attach_helper took 45.74s to complete (handling 2 nvme drive(s)) 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.233 14:04:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.234 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:29.234 14:04:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.234 14:04:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.234 14:04:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.234 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:29.234 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:29.234 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:29.234 14:04:30 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:29.234 14:04:30 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:29.234 14:04:30 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:29.234 14:04:30 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:29.234 14:04:30 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:29.234 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:29.234 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:29.234 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:29.234 14:04:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:29.234 14:04:30 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.789 14:04:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.789 14:04:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.789 14:04:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:35.789 14:04:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:35.789 [2024-12-09 14:04:36.855545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:35.789 [2024-12-09 14:04:36.856674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.789 [2024-12-09 14:04:36.856774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.789 [2024-12-09 14:04:36.856831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.789 [2024-12-09 14:04:36.856896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.789 [2024-12-09 14:04:36.856914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.789 [2024-12-09 14:04:36.856940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.789 [2024-12-09 14:04:36.856998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.789 [2024-12-09 14:04:36.857058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.789 [2024-12-09 14:04:36.857082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.789 [2024-12-09 14:04:36.857108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.789 [2024-12-09 14:04:36.857154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.789 [2024-12-09 14:04:36.857185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.789 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:35.789 14:04:37 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.789 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.789 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.789 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.789 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.789 14:04:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.790 14:04:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.790 [2024-12-09 14:04:37.355535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:35.790 14:04:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.790 [2024-12-09 14:04:37.356558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.790 [2024-12-09 14:04:37.356586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.790 [2024-12-09 14:04:37.356598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.790 [2024-12-09 14:04:37.356613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.790 [2024-12-09 14:04:37.356621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.790 [2024-12-09 14:04:37.356629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.790 [2024-12-09 14:04:37.356640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.790 [2024-12-09 14:04:37.356647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.790 [2024-12-09 14:04:37.356655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.790 [2024-12-09 14:04:37.356663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.790 [2024-12-09 14:04:37.356670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.790 [2024-12-09 14:04:37.356677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.790 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:35.790 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.355 14:04:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.355 14:04:37 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.355 14:04:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.355 14:04:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.355 14:04:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.355 14:04:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.355 14:04:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.355 14:04:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.355 14:04:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:36.355 14:04:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.355 14:04:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.355 14:04:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.599 14:04:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.599 14:04:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.599 14:04:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.599 14:04:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.599 14:04:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.599 14:04:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:48.599 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.599 [2024-12-09 14:04:50.255732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
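Once both controllers are gone, the "@56-@71" run traced above brings them back and verifies them. A sketch: the echoed values (1, uio_pci_generic, the BDFs, the empty string) come straight from the trace, but the sysfs files they are written to are not recorded in the log, so every path below is an assumption:

    echo 1 > /sys/bus/pci/rescan                           # "@56" (assumed path)
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic \
            > "/sys/bus/pci/devices/$dev/driver_override"  # "@59" (assumed path)
        echo "$dev" > /sys/bus/pci/drivers_probe           # "@60" (assumed path)
        echo "$dev" > /sys/bus/pci/drivers_probe           # "@61" (the BDF is echoed twice in the trace)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # "@62" (assumed path)
    done
    sleep 12                        # "@66": twice the hotplug_wait=6 set at "@28"
    bdfs=($(bdev_bdfs))             # "@70"
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]        # "@71": both BDFs are back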
00:11:48.599 [2024-12-09 14:04:50.256738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.599 [2024-12-09 14:04:50.256766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.599 [2024-12-09 14:04:50.256776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.599 [2024-12-09 14:04:50.256792] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.599 [2024-12-09 14:04:50.256799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.599 [2024-12-09 14:04:50.256808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.599 [2024-12-09 14:04:50.256816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.599 [2024-12-09 14:04:50.256824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.599 [2024-12-09 14:04:50.256830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.599 [2024-12-09 14:04:50.256839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.599 [2024-12-09 14:04:50.256845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.599 [2024-12-09 14:04:50.256854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.164 [2024-12-09 14:04:50.655750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
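Putting the cycle together: the "@27-@43" entries that open each iteration are the helper's outer loop. The locals (hotplug_events=3, hotplug_wait=6, use_bdev=true) are verbatim from the "@27-@29" trace; the control flow around them is inferred, so this is a sketch, not the script's exact text:

    remove_attach_helper() {
        local hotplug_events=$1    # 3     ("@27")
        local hotplug_wait=$2      # 6     ("@28")
        local use_bdev=$3          # true  ("@29")
        local dev bdfs             #       ("@30")

        sleep "$hotplug_wait"              # "@36"
        while ((hotplug_events--)); do     # "@38"
            for dev in "${nvmes[@]}"; do   # "@39"
                echo 1 > "/sys/bus/pci/devices/$dev/remove"  # "@40" (path assumed)
            done
            if "$use_bdev"; then           # "@43 true": take the bdev-polling branch
                : # wait for removal ("@50-@51"), re-attach ("@56-@66"),
                  # and verify ("@68-@71"), as sketched earlier in this log.
            fi
        done
    }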
00:11:49.164 [2024-12-09 14:04:50.656790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.164 [2024-12-09 14:04:50.656922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.164 [2024-12-09 14:04:50.656941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.164 [2024-12-09 14:04:50.656957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.164 [2024-12-09 14:04:50.656968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.164 [2024-12-09 14:04:50.656975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.164 [2024-12-09 14:04:50.656984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.164 [2024-12-09 14:04:50.656991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.164 [2024-12-09 14:04:50.657000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.164 [2024-12-09 14:04:50.657007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.164 [2024-12-09 14:04:50.657015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.164 [2024-12-09 14:04:50.657022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.164 14:04:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.164 14:04:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.164 14:04:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.164 14:04:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:49.422 14:04:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.422 14:04:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.422 14:04:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.622 14:05:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.622 14:05:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.622 14:05:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.622 14:05:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.622 14:05:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.622 14:05:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:01.622 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.622 [2024-12-09 14:05:03.155939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:01.622 [2024-12-09 14:05:03.158577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.622 [2024-12-09 14:05:03.158696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.622 [2024-12-09 14:05:03.158712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.622 [2024-12-09 14:05:03.158730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.622 [2024-12-09 14:05:03.158738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.622 [2024-12-09 14:05:03.158747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.622 [2024-12-09 14:05:03.158756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.622 [2024-12-09 14:05:03.158766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.622 [2024-12-09 14:05:03.158773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.622 [2024-12-09 14:05:03.158781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.622 [2024-12-09 14:05:03.158788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.622 [2024-12-09 14:05:03.158796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:01.880 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.880 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.880 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.880 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.880 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.880 14:05:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.880 14:05:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.880 14:05:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.880 [2024-12-09 14:05:03.655940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
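The "time=45.74", "helper_time=45.74" and "remove_attach_helper took 45.74s ..." entries between the two runs come from a timing wrapper traced as timing_cmd (common/autotest_common.sh@709-@722). A sketch of the mechanism: TIMEFORMAT=%2R is verbatim from the "@713" entry, but the capture plumbing is assumed, and this sketch discards the timed helper's own output for brevity, which the real harness clearly does not:

    # Time a shell function with bash's built-in `time`, reporting the elapsed
    # wall-clock seconds with two decimals (TIMEFORMAT=%2R).
    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R
        # `time` prints its report on the shell's stderr; run the command in a
        # brace group and capture that report via the outer 2>&1.
        time=$({ time "$@" > /dev/null 2>&1; } 2>&1) || cmd_es=$?
        echo "$time"      # e.g. "45.74", matching the "@720 echo 45.74" entry
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)               # "@21"
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2                                                  # "@22"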
00:12:01.880 [2024-12-09 14:05:03.656973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.880 [2024-12-09 14:05:03.657005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.880 [2024-12-09 14:05:03.657017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 [2024-12-09 14:05:03.657033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.880 [2024-12-09 14:05:03.657043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.880 [2024-12-09 14:05:03.657050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 [2024-12-09 14:05:03.657059] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.880 [2024-12-09 14:05:03.657066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.880 [2024-12-09 14:05:03.657075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 [2024-12-09 14:05:03.657082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.880 [2024-12-09 14:05:03.657093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.880 [2024-12-09 14:05:03.657099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.880 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:01.880 14:05:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:02.444 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:02.444 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.444 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.444 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.444 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.444 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.444 14:05:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.444 14:05:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.444 14:05:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.444 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:02.444 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.702 14:05:04 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.702 14:05:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.70 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.70 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.70 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.70 2 00:12:14.905 remove_attach_helper took 45.70s to complete (handling 2 nvme drive(s)) 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:14.905 14:05:16 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67287 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67287 ']' 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67287 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67287 00:12:14.905 killing process with pid 67287 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67287' 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67287 00:12:14.905 14:05:16 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67287 00:12:16.282 14:05:17 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:16.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:16.854 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.854 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.854 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.854 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 
00:12:16.854 00:12:16.854 real 2m30.564s 00:12:16.854 user 1m52.649s 00:12:16.854 sys 0m16.716s 00:12:16.854 14:05:18 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.854 14:05:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.854 ************************************ 00:12:16.854 END TEST sw_hotplug 00:12:16.854 ************************************ 00:12:16.854 14:05:18 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:16.854 14:05:18 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.854 14:05:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.854 14:05:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.854 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.854 ************************************ 00:12:16.854 START TEST nvme_xnvme 00:12:16.854 ************************************ 00:12:16.854 14:05:18 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:17.118 * Looking for test storage... 00:12:17.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.118 14:05:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.118 --rc genhtml_branch_coverage=1 00:12:17.118 --rc genhtml_function_coverage=1 00:12:17.118 --rc genhtml_legend=1 00:12:17.118 --rc geninfo_all_blocks=1 00:12:17.118 --rc geninfo_unexecuted_blocks=1 00:12:17.118 00:12:17.118 ' 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.118 --rc genhtml_branch_coverage=1 00:12:17.118 --rc genhtml_function_coverage=1 00:12:17.118 --rc genhtml_legend=1 00:12:17.118 --rc geninfo_all_blocks=1 00:12:17.118 --rc geninfo_unexecuted_blocks=1 00:12:17.118 00:12:17.118 ' 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.118 --rc genhtml_branch_coverage=1 00:12:17.118 --rc genhtml_function_coverage=1 00:12:17.118 --rc genhtml_legend=1 00:12:17.118 --rc geninfo_all_blocks=1 00:12:17.118 --rc geninfo_unexecuted_blocks=1 00:12:17.118 00:12:17.118 ' 00:12:17.118 14:05:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.118 --rc genhtml_branch_coverage=1 00:12:17.118 --rc genhtml_function_coverage=1 00:12:17.118 --rc genhtml_legend=1 00:12:17.119 --rc geninfo_all_blocks=1 00:12:17.119 --rc geninfo_unexecuted_blocks=1 00:12:17.119 00:12:17.119 ' 00:12:17.119 14:05:18 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:17.119 14:05:18 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:17.119 14:05:18 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:17.119 14:05:18 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:17.119 14:05:18 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:17.119 14:05:18 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:17.119 14:05:18 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:17.119 14:05:18 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:17.119 14:05:18 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:17.119 14:05:18 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:17.119 14:05:18 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:17.119 14:05:18 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:17.119 14:05:18 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:17.119 14:05:18 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:17.119 #define SPDK_CONFIG_H 00:12:17.119 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:17.119 #define SPDK_CONFIG_APPS 1 00:12:17.119 #define SPDK_CONFIG_ARCH native 00:12:17.119 #define SPDK_CONFIG_ASAN 1 00:12:17.119 #undef SPDK_CONFIG_AVAHI 00:12:17.119 #undef SPDK_CONFIG_CET 00:12:17.119 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:17.119 #define SPDK_CONFIG_COVERAGE 1 00:12:17.119 #define SPDK_CONFIG_CROSS_PREFIX 00:12:17.119 #undef SPDK_CONFIG_CRYPTO 00:12:17.119 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:17.119 #undef SPDK_CONFIG_CUSTOMOCF 00:12:17.119 #undef SPDK_CONFIG_DAOS 00:12:17.119 #define SPDK_CONFIG_DAOS_DIR 00:12:17.119 #define SPDK_CONFIG_DEBUG 1 00:12:17.119 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:17.119 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:17.119 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:17.119 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:17.120 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:17.120 #undef SPDK_CONFIG_DPDK_UADK 00:12:17.120 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:17.120 #define SPDK_CONFIG_EXAMPLES 1 00:12:17.120 #undef SPDK_CONFIG_FC 00:12:17.120 #define SPDK_CONFIG_FC_PATH 00:12:17.120 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:17.120 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:17.120 #define SPDK_CONFIG_FSDEV 1 00:12:17.120 #undef SPDK_CONFIG_FUSE 00:12:17.120 #undef SPDK_CONFIG_FUZZER 00:12:17.120 #define SPDK_CONFIG_FUZZER_LIB 00:12:17.120 #undef SPDK_CONFIG_GOLANG 00:12:17.120 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:17.120 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:17.120 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:17.120 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:17.120 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:17.120 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:17.120 #undef SPDK_CONFIG_HAVE_LZ4 00:12:17.120 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:17.120 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:17.120 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:17.120 #define SPDK_CONFIG_IDXD 1 00:12:17.120 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:17.120 #undef SPDK_CONFIG_IPSEC_MB 00:12:17.120 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:17.120 #define SPDK_CONFIG_ISAL 1 00:12:17.120 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:17.120 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:17.120 #define SPDK_CONFIG_LIBDIR 00:12:17.120 #undef SPDK_CONFIG_LTO 00:12:17.120 #define SPDK_CONFIG_MAX_LCORES 128 00:12:17.120 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:17.120 #define SPDK_CONFIG_NVME_CUSE 1 00:12:17.120 #undef SPDK_CONFIG_OCF 00:12:17.120 #define SPDK_CONFIG_OCF_PATH 00:12:17.120 #define SPDK_CONFIG_OPENSSL_PATH 00:12:17.120 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:17.120 #define SPDK_CONFIG_PGO_DIR 00:12:17.120 #undef SPDK_CONFIG_PGO_USE 00:12:17.120 #define SPDK_CONFIG_PREFIX /usr/local 00:12:17.120 #undef SPDK_CONFIG_RAID5F 00:12:17.120 #undef SPDK_CONFIG_RBD 00:12:17.120 #define SPDK_CONFIG_RDMA 1 00:12:17.120 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:17.120 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:17.120 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:17.120 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:17.120 #define SPDK_CONFIG_SHARED 1 00:12:17.120 #undef SPDK_CONFIG_SMA 00:12:17.120 #define SPDK_CONFIG_TESTS 1 00:12:17.120 #undef SPDK_CONFIG_TSAN 00:12:17.120 #define SPDK_CONFIG_UBLK 1 00:12:17.120 #define SPDK_CONFIG_UBSAN 1 00:12:17.120 #undef SPDK_CONFIG_UNIT_TESTS 00:12:17.120 #undef SPDK_CONFIG_URING 00:12:17.120 #define SPDK_CONFIG_URING_PATH 00:12:17.120 #undef SPDK_CONFIG_URING_ZNS 00:12:17.120 #undef SPDK_CONFIG_USDT 00:12:17.120 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:17.120 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:17.120 #undef SPDK_CONFIG_VFIO_USER 00:12:17.120 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:17.120 #define SPDK_CONFIG_VHOST 1 00:12:17.120 #define SPDK_CONFIG_VIRTIO 1 00:12:17.120 #undef SPDK_CONFIG_VTUNE 00:12:17.120 #define SPDK_CONFIG_VTUNE_DIR 00:12:17.120 #define SPDK_CONFIG_WERROR 1 00:12:17.120 #define SPDK_CONFIG_WPDK_DIR 00:12:17.120 #define SPDK_CONFIG_XNVME 1 00:12:17.120 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:17.120 14:05:18 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:17.120 14:05:18 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.120 14:05:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.120 14:05:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.120 14:05:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.120 14:05:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.120 14:05:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.120 14:05:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.120 14:05:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.120 14:05:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:17.120 14:05:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:17.120 
14:05:18 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:17.120 14:05:18 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:17.120 14:05:18 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:17.121 14:05:18 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:17.121 14:05:18 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:17.121 14:05:18 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
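The trace above shows autotest_common.sh assembling the sanitizer environment before the xnvme tests start. Condensed into a standalone sketch — the option strings and suppression-file path are copied from the trace, while the single echo is a simplification of the script's suppression setup:

    # Sanitizer setup as traced above; a leak in libfuse3 is suppressed so
    # LeakSanitizer does not fail runs on a known third-party leak.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo 'leak:libfuse3.so' >> "$asan_suppression_file"
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$asan_suppression_file

With halt_on_error=1 and exitcode=134, a single UBSAN report aborts the process immediately, which is why one sanitizer hit fails the whole autotest run.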
00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68666 ]] 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68666 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.JFeE59 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.JFeE59/tests/xnvme /tmp/spdk.JFeE59 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:17.122 14:05:18 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976694784 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591322624 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976694784 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591322624 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94969872384 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4732907520 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:17.122 * Looking for test storage... 
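The repeated read -r / mounts / fss / avails assignments above are set_test_storage walking the `df -T` output one mount per iteration. A minimal sketch of that loop — the array and variable names follow the trace, but the 1024 multiplier is an assumption (df -T prints 1K blocks while the traced values are bytes):

    # Record filesystem type and byte counts per mount point, as traced above.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # assumed conversion from 1K blocks
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

Each storage candidate is then resolved to its mount point and accepted once the mount's available space covers the requested 2214592512 bytes, which is how /home/vagrant/spdk_repo/spdk/test/nvme/xnvme on the 20 GB btrfs /home volume is selected just below.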
00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976694784 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:17.122 14:05:18 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.123 14:05:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.384 14:05:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:17.384 14:05:18 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.384 14:05:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.384 --rc genhtml_branch_coverage=1 00:12:17.384 --rc genhtml_function_coverage=1 00:12:17.384 --rc genhtml_legend=1 00:12:17.384 --rc geninfo_all_blocks=1 00:12:17.384 --rc geninfo_unexecuted_blocks=1 00:12:17.384 00:12:17.384 ' 00:12:17.384 14:05:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.384 --rc genhtml_branch_coverage=1 00:12:17.384 --rc genhtml_function_coverage=1 00:12:17.384 --rc genhtml_legend=1 00:12:17.384 --rc geninfo_all_blocks=1 
00:12:17.384 --rc geninfo_unexecuted_blocks=1 00:12:17.384 00:12:17.384 ' 00:12:17.384 14:05:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.384 --rc genhtml_branch_coverage=1 00:12:17.384 --rc genhtml_function_coverage=1 00:12:17.384 --rc genhtml_legend=1 00:12:17.384 --rc geninfo_all_blocks=1 00:12:17.384 --rc geninfo_unexecuted_blocks=1 00:12:17.384 00:12:17.384 ' 00:12:17.384 14:05:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.384 --rc genhtml_branch_coverage=1 00:12:17.384 --rc genhtml_function_coverage=1 00:12:17.384 --rc genhtml_legend=1 00:12:17.384 --rc geninfo_all_blocks=1 00:12:17.384 --rc geninfo_unexecuted_blocks=1 00:12:17.384 00:12:17.384 ' 00:12:17.384 14:05:18 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.384 14:05:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.385 14:05:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.385 14:05:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.385 14:05:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.385 14:05:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:17.385 14:05:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.385 14:05:18 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:17.385 14:05:18 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:17.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:17.645 Waiting for block devices as requested 00:12:17.907 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.907 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.907 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.907 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.188 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:23.188 14:05:24 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:23.445 14:05:25 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:23.445 14:05:25 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:23.704 14:05:25 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:23.704 14:05:25 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:23.704 No valid GPT data, bailing 00:12:23.704 14:05:25 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:23.704 14:05:25 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:23.704 14:05:25 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:23.704 14:05:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:23.704 14:05:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.704 14:05:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.704 14:05:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.704 ************************************ 00:12:23.704 START TEST xnvme_rpc 00:12:23.704 ************************************ 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:23.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69065 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69065 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69065 ']' 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.704 14:05:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:23.704 [2024-12-09 14:05:25.373244] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
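The reset-and-probe sequence traced above decides which namespaces the tests may claim: setup.sh reset hands the controllers back to the kernel nvme driver, and a namespace that still carries a partition table is left alone ("No valid GPT data, bailing" means this one does not). A condensed sketch of that selection, assuming blkid(8) is available and simplifying block_in_use's return-code convention; the real helper also consults spdk-gpt.py:

# Condensed sketch of the device selection above (simplified; the
# harness also runs spdk-gpt.py and uses an extglob to skip partitions).
declare -A xnvme_filename
for nvme in /dev/nvme*n1; do
    # A non-empty PTTYPE means the namespace holds a partition table
    # and is in use, so it must not be claimed for testing.
    pt=$(blkid -s PTTYPE -o value "$nvme" 2>/dev/null)
    [[ -n $pt ]] && continue
    xnvme_filename["libaio"]=$nvme
    xnvme_filename["io_uring"]=$nvme
    # io_uring_cmd drives the matching NVMe generic char device.
    xnvme_filename["io_uring_cmd"]=/dev/ng${nvme#/dev/nvme}
    break
done

These are exactly the three assignments logged at xnvme/common.sh@80-82.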
00:12:23.704 [2024-12-09 14:05:25.373361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69065 ] 00:12:24.021 [2024-12-09 14:05:25.524550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.021 [2024-12-09 14:05:25.622403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.618 xnvme_bdev 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.618 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:24.619 14:05:26 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69065 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69065 ']' 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69065 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69065 00:12:24.619 killing process with pid 69065 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69065' 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69065 00:12:24.619 14:05:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69065 00:12:26.520 00:12:26.520 real 0m2.589s 00:12:26.520 user 0m2.688s 00:12:26.520 sys 0m0.348s 00:12:26.520 ************************************ 00:12:26.520 END TEST xnvme_rpc 00:12:26.520 ************************************ 00:12:26.520 14:05:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.520 14:05:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.520 14:05:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:26.520 14:05:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.520 14:05:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.520 14:05:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.520 ************************************ 00:12:26.520 START TEST xnvme_bdevperf 00:12:26.520 ************************************ 00:12:26.520 14:05:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:26.520 14:05:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:26.520 14:05:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:26.520 14:05:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:26.520 14:05:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:26.520 14:05:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:26.520 14:05:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:26.520 14:05:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.520 { 00:12:26.520 "subsystems": [ 00:12:26.520 { 00:12:26.520 "subsystem": "bdev", 00:12:26.520 "config": [ 00:12:26.520 { 00:12:26.520 "params": { 00:12:26.520 "io_mechanism": "libaio", 00:12:26.520 "conserve_cpu": false, 00:12:26.520 "filename": "/dev/nvme0n1", 00:12:26.520 "name": "xnvme_bdev" 00:12:26.520 }, 00:12:26.520 "method": "bdev_xnvme_create" 00:12:26.520 }, 00:12:26.520 { 00:12:26.520 "method": "bdev_wait_for_examine" 00:12:26.520 } 00:12:26.520 ] 00:12:26.520 } 00:12:26.520 ] 00:12:26.520 } 00:12:26.520 [2024-12-09 14:05:27.992271] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:12:26.520 [2024-12-09 14:05:27.992383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69128 ] 00:12:26.520 [2024-12-09 14:05:28.148779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.520 [2024-12-09 14:05:28.246642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.778 Running I/O for 5 seconds... 00:12:29.080 28721.00 IOPS, 112.19 MiB/s [2024-12-09T14:05:31.808Z] 29331.50 IOPS, 114.58 MiB/s [2024-12-09T14:05:32.741Z] 29566.33 IOPS, 115.49 MiB/s [2024-12-09T14:05:33.675Z] 29261.00 IOPS, 114.30 MiB/s 00:12:31.881 Latency(us) 00:12:31.881 [2024-12-09T14:05:33.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.881 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:31.881 xnvme_bdev : 5.00 29263.01 114.31 0.00 0.00 2182.17 184.32 6704.84 00:12:31.881 [2024-12-09T14:05:33.675Z] =================================================================================================================== 00:12:31.881 [2024-12-09T14:05:33.675Z] Total : 29263.01 114.31 0.00 0.00 2182.17 184.32 6704.84 00:12:32.821 14:05:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:32.821 14:05:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:32.821 14:05:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:32.821 14:05:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:32.821 14:05:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:32.821 { 00:12:32.821 "subsystems": [ 00:12:32.821 { 00:12:32.821 "subsystem": "bdev", 00:12:32.821 "config": [ 00:12:32.821 { 00:12:32.821 "params": { 00:12:32.821 "io_mechanism": "libaio", 00:12:32.821 "conserve_cpu": false, 00:12:32.821 "filename": "/dev/nvme0n1", 00:12:32.821 "name": "xnvme_bdev" 00:12:32.821 }, 00:12:32.821 "method": "bdev_xnvme_create" 00:12:32.821 }, 00:12:32.821 { 00:12:32.821 "method": "bdev_wait_for_examine" 00:12:32.821 } 00:12:32.821 ] 00:12:32.821 } 00:12:32.821 ] 00:12:32.821 } 00:12:32.821 [2024-12-09 14:05:34.317829] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
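Both bdevperf runs in this test use the same plumbing: gen_conf prints the bdev subsystem JSON shown above, and bdevperf reads it from a file descriptor as --json /dev/fd/62. A rough stand-alone equivalent, with process substitution standing in for the harness's descriptor setup and flag meanings inferred from how the harness uses them:

# Stand-alone sketch of the traced bdevperf invocation; the flags are
# the ones logged above (-q depth, -w workload, -t seconds, -o IO
# size, -T the bdev under test).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"libaio","conserve_cpu":false,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096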
00:12:32.822 [2024-12-09 14:05:34.317949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69203 ] 00:12:32.822 [2024-12-09 14:05:34.475483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.822 [2024-12-09 14:05:34.576831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.094 Running I/O for 5 seconds... 00:12:35.425 23355.00 IOPS, 91.23 MiB/s [2024-12-09T14:05:38.160Z] 23557.50 IOPS, 92.02 MiB/s [2024-12-09T14:05:39.103Z] 24367.00 IOPS, 95.18 MiB/s [2024-12-09T14:05:40.046Z] 24211.00 IOPS, 94.57 MiB/s 00:12:38.252 Latency(us) 00:12:38.252 [2024-12-09T14:05:40.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.252 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:38.253 xnvme_bdev : 5.00 24238.85 94.68 0.00 0.00 2634.46 212.68 9225.45 00:12:38.253 [2024-12-09T14:05:40.047Z] =================================================================================================================== 00:12:38.253 [2024-12-09T14:05:40.047Z] Total : 24238.85 94.68 0.00 0.00 2634.46 212.68 9225.45 00:12:39.196 ************************************ 00:12:39.196 END TEST xnvme_bdevperf 00:12:39.196 ************************************ 00:12:39.196 00:12:39.196 real 0m12.701s 00:12:39.196 user 0m4.297s 00:12:39.196 sys 0m6.467s 00:12:39.196 14:05:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.196 14:05:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:39.196 14:05:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:39.196 14:05:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.196 14:05:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.196 14:05:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.196 ************************************ 00:12:39.196 START TEST xnvme_fio_plugin 00:12:39.196 ************************************ 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.196 14:05:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.196 { 00:12:39.196 "subsystems": [ 00:12:39.196 { 00:12:39.196 "subsystem": "bdev", 00:12:39.196 "config": [ 00:12:39.196 { 00:12:39.196 "params": { 00:12:39.196 "io_mechanism": "libaio", 00:12:39.196 "conserve_cpu": false, 00:12:39.196 "filename": "/dev/nvme0n1", 00:12:39.196 "name": "xnvme_bdev" 00:12:39.196 }, 00:12:39.196 "method": "bdev_xnvme_create" 00:12:39.196 }, 00:12:39.196 { 00:12:39.196 "method": "bdev_wait_for_examine" 00:12:39.196 } 00:12:39.196 ] 00:12:39.196 } 00:12:39.196 ] 00:12:39.196 } 00:12:39.196 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:39.196 fio-3.35 00:12:39.196 Starting 1 thread 00:12:45.785 00:12:45.785 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69318: Mon Dec 9 14:05:46 2024 00:12:45.785 read: IOPS=22.3k, BW=87.2MiB/s (91.5MB/s)(436MiB/5001msec) 00:12:45.785 slat (usec): min=4, max=2438, avg=39.10, stdev=136.97 00:12:45.785 clat (usec): min=105, max=6260, avg=1795.62, stdev=832.30 00:12:45.785 lat (usec): min=199, max=6356, avg=1834.72, stdev=821.21 00:12:45.785 clat percentiles (usec): 00:12:45.785 | 1.00th=[ 265], 5.00th=[ 515], 10.00th=[ 717], 20.00th=[ 1045], 00:12:45.785 | 30.00th=[ 1303], 40.00th=[ 1532], 50.00th=[ 1762], 60.00th=[ 1991], 00:12:45.785 | 70.00th=[ 2212], 80.00th=[ 2507], 90.00th=[ 2868], 95.00th=[ 3228], 00:12:45.785 | 99.00th=[ 3982], 99.50th=[ 4293], 99.90th=[ 4948], 99.95th=[ 5211], 00:12:45.785 | 99.99th=[ 5866] 00:12:45.785 bw ( KiB/s): min=83720, max=94355, per=100.00%, avg=89676.78, 
stdev=3172.82, samples=9 00:12:45.785 iops : min=20930, max=23588, avg=22419.11, stdev=793.07, samples=9 00:12:45.785 lat (usec) : 250=0.84%, 500=3.84%, 750=6.19%, 1000=7.59% 00:12:45.785 lat (msec) : 2=42.25%, 4=38.31%, 10=0.98% 00:12:45.785 cpu : usr=23.58%, sys=67.52%, ctx=11686, majf=0, minf=764 00:12:45.785 IO depths : 1=0.2%, 2=0.7%, 4=2.9%, 8=9.2%, 16=25.1%, 32=59.9%, >=64=1.9% 00:12:45.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.785 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:12:45.785 issued rwts: total=111662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.785 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:45.785 00:12:45.785 Run status group 0 (all jobs): 00:12:45.785 READ: bw=87.2MiB/s (91.5MB/s), 87.2MiB/s-87.2MiB/s (91.5MB/s-91.5MB/s), io=436MiB (457MB), run=5001-5001msec 00:12:45.785 ----------------------------------------------------- 00:12:45.785 Suppressions used: 00:12:45.785 count bytes template 00:12:45.785 1 11 /usr/src/fio/parse.c 00:12:45.785 1 8 libtcmalloc_minimal.so 00:12:45.785 1 904 libcrypto.so 00:12:45.785 ----------------------------------------------------- 00:12:45.785 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:45.785 14:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.785 { 00:12:45.785 "subsystems": [ 00:12:45.785 { 00:12:45.785 "subsystem": "bdev", 00:12:45.785 "config": [ 00:12:45.785 { 00:12:45.785 "params": { 00:12:45.785 "io_mechanism": "libaio", 00:12:45.785 "conserve_cpu": false, 00:12:45.785 "filename": "/dev/nvme0n1", 00:12:45.785 "name": "xnvme_bdev" 00:12:45.785 }, 00:12:45.785 "method": "bdev_xnvme_create" 00:12:45.785 }, 00:12:45.785 { 00:12:45.785 "method": "bdev_wait_for_examine" 00:12:45.785 } 00:12:45.785 ] 00:12:45.785 } 00:12:45.785 ] 00:12:45.785 } 00:12:46.046 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:46.046 fio-3.35 00:12:46.046 Starting 1 thread 00:12:52.633 00:12:52.633 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69415: Mon Dec 9 14:05:53 2024 00:12:52.633 write: IOPS=23.7k, BW=92.5MiB/s (97.0MB/s)(463MiB/5001msec); 0 zone resets 00:12:52.633 slat (usec): min=4, max=2538, avg=37.14, stdev=118.71 00:12:52.633 clat (usec): min=86, max=5651, avg=1655.66, stdev=811.07 00:12:52.633 lat (usec): min=104, max=5726, avg=1692.79, stdev=803.50 00:12:52.633 clat percentiles (usec): 00:12:52.633 | 1.00th=[ 260], 5.00th=[ 449], 10.00th=[ 619], 20.00th=[ 906], 00:12:52.633 | 30.00th=[ 1156], 40.00th=[ 1401], 50.00th=[ 1614], 60.00th=[ 1827], 00:12:52.633 | 70.00th=[ 2057], 80.00th=[ 2311], 90.00th=[ 2704], 95.00th=[ 3064], 00:12:52.633 | 99.00th=[ 3818], 99.50th=[ 4080], 99.90th=[ 4686], 99.95th=[ 4883], 00:12:52.633 | 99.99th=[ 5211] 00:12:52.633 bw ( KiB/s): min=86048, max=109232, per=98.89%, avg=93670.22, stdev=7057.94, samples=9 00:12:52.633 iops : min=21512, max=27308, avg=23417.56, stdev=1764.49, samples=9 00:12:52.633 lat (usec) : 100=0.01%, 250=0.87%, 500=5.48%, 750=8.04%, 1000=9.33% 00:12:52.633 lat (msec) : 2=44.05%, 4=31.59%, 10=0.65% 00:12:52.633 cpu : usr=23.96%, sys=65.54%, ctx=15, majf=0, minf=765 00:12:52.633 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=10.0%, 16=25.5%, 32=58.5%, >=64=1.9% 00:12:52.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.633 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:12:52.633 issued rwts: total=0,118428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.633 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:52.633 00:12:52.633 Run status group 0 (all jobs): 00:12:52.633 WRITE: bw=92.5MiB/s (97.0MB/s), 92.5MiB/s-92.5MiB/s (97.0MB/s-97.0MB/s), io=463MiB (485MB), run=5001-5001msec 00:12:52.633 ----------------------------------------------------- 00:12:52.633 Suppressions used: 00:12:52.633 count bytes template 00:12:52.633 1 11 /usr/src/fio/parse.c 00:12:52.633 1 8 libtcmalloc_minimal.so 00:12:52.633 1 904 libcrypto.so 00:12:52.633 ----------------------------------------------------- 00:12:52.633 00:12:52.633 00:12:52.633 real 0m13.620s 00:12:52.633 user 0m5.128s 00:12:52.633 sys 0m7.134s 00:12:52.633 
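One step in the fio_plugin trace above is easy to miss: before launching fio, the harness checks with ldd which sanitizer runtime the spdk_bdev ioengine links against and preloads it ahead of the plugin, so the ASAN runtime is initialized before fio loads the engine. Condensed, with /dev/fd/62 standing in for the JSON config descriptor the harness supplies (point it at a JSON file when running by hand):

# Sketch of the sanitizer preload logic traced above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Resolve the ASAN runtime the plugin links against (empty if none).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev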
************************************ 00:12:52.633 END TEST xnvme_fio_plugin 00:12:52.633 ************************************ 00:12:52.633 14:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.633 14:05:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:52.633 14:05:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:52.633 14:05:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:52.633 14:05:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:52.633 14:05:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:52.633 14:05:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:52.633 14:05:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.633 14:05:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.633 ************************************ 00:12:52.633 START TEST xnvme_rpc 00:12:52.633 ************************************ 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:52.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69496 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69496 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69496 ']' 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:52.633 14:05:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.894 [2024-12-09 14:05:54.447316] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
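The suite now repeats xnvme_rpc with conserve_cpu on. The only moving part is the cc map declared at xnvme.sh@48-50, which turns the boolean into an optional -c argument on the create RPC:

# Sketch of the flag mapping traced above; rpc_cmd is the harness RPC
# wrapper. With conserve_cpu=true the call gains a trailing -c.
declare -A cc=([false]="" [true]="-c")
conserve_cpu=true
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ${cc[$conserve_cpu]}

which is exactly the call logged below at xnvme/xnvme.sh@56.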
00:12:52.894 [2024-12-09 14:05:54.447605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69496 ] 00:12:52.894 [2024-12-09 14:05:54.605452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.155 [2024-12-09 14:05:54.703653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.780 xnvme_bdev 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.780 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69496 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69496 ']' 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69496 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69496 00:12:53.781 killing process with pid 69496 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69496' 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69496 00:12:53.781 14:05:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69496 00:12:55.696 00:12:55.696 real 0m2.603s 00:12:55.696 user 0m2.675s 00:12:55.696 sys 0m0.360s 00:12:55.696 14:05:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.696 ************************************ 00:12:55.696 END TEST xnvme_rpc 00:12:55.696 ************************************ 00:12:55.696 14:05:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.696 14:05:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:55.696 14:05:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.696 14:05:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.696 14:05:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.696 ************************************ 00:12:55.696 START TEST xnvme_bdevperf 00:12:55.696 ************************************ 00:12:55.696 14:05:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:55.696 14:05:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:55.696 14:05:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:55.696 14:05:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:55.696 14:05:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:55.696 14:05:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:55.696 14:05:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:55.696 14:05:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.696 { 00:12:55.696 "subsystems": [ 00:12:55.696 { 00:12:55.696 "subsystem": "bdev", 00:12:55.696 "config": [ 00:12:55.696 { 00:12:55.696 "params": { 00:12:55.696 "io_mechanism": "libaio", 00:12:55.696 "conserve_cpu": true, 00:12:55.696 "filename": "/dev/nvme0n1", 00:12:55.696 "name": "xnvme_bdev" 00:12:55.696 }, 00:12:55.696 "method": "bdev_xnvme_create" 00:12:55.696 }, 00:12:55.696 { 00:12:55.696 "method": "bdev_wait_for_examine" 00:12:55.696 } 00:12:55.696 ] 00:12:55.696 } 00:12:55.696 ] 00:12:55.696 } 00:12:55.696 [2024-12-09 14:05:57.107811] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:12:55.696 [2024-12-09 14:05:57.107926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69570 ] 00:12:55.696 [2024-12-09 14:05:57.268527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.696 [2024-12-09 14:05:57.363641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.957 Running I/O for 5 seconds... 00:12:57.845 21405.00 IOPS, 83.61 MiB/s [2024-12-09T14:06:01.027Z] 21604.00 IOPS, 84.39 MiB/s [2024-12-09T14:06:01.970Z] 21896.00 IOPS, 85.53 MiB/s [2024-12-09T14:06:02.911Z] 22640.75 IOPS, 88.44 MiB/s 00:13:01.117 Latency(us) 00:13:01.117 [2024-12-09T14:06:02.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.117 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:01.117 xnvme_bdev : 5.00 23219.13 90.70 0.00 0.00 2751.22 195.35 8670.92 00:13:01.117 [2024-12-09T14:06:02.911Z] =================================================================================================================== 00:13:01.117 [2024-12-09T14:06:02.911Z] Total : 23219.13 90.70 0.00 0.00 2751.22 195.35 8670.92 00:13:01.690 14:06:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.690 14:06:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:01.690 14:06:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:01.690 14:06:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.690 14:06:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.690 { 00:13:01.690 "subsystems": [ 00:13:01.690 { 00:13:01.690 "subsystem": "bdev", 00:13:01.690 "config": [ 00:13:01.690 { 00:13:01.690 "params": { 00:13:01.690 "io_mechanism": "libaio", 00:13:01.690 "conserve_cpu": true, 00:13:01.690 "filename": "/dev/nvme0n1", 00:13:01.690 "name": "xnvme_bdev" 00:13:01.690 }, 00:13:01.690 "method": "bdev_xnvme_create" 00:13:01.690 }, 00:13:01.690 { 00:13:01.690 "method": "bdev_wait_for_examine" 00:13:01.690 } 00:13:01.690 ] 00:13:01.690 } 00:13:01.690 ] 00:13:01.690 } 00:13:01.690 [2024-12-09 14:06:03.423137] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
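The xnvme_rpc passes on either side of these bdevperf runs verify every parameter the same way: framework_get_config dumps the live bdev configuration over RPC and jq extracts one field of the bdev_xnvme_create entry. A sketch of that rpc_xnvme helper from xnvme/common.sh@65-66, with rpc_cmd assumed to be the same harness wrapper used above:

# Sketch of rpc_xnvme: read back one parameter of the created bdev.
rpc_xnvme() {
    rpc_cmd framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}
# Typical assertion from xnvme.sh@65: the stored flag matches the leg.
[[ $(rpc_xnvme conserve_cpu) == true ]]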
00:13:01.690 [2024-12-09 14:06:03.423255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69646 ] 00:13:01.952 [2024-12-09 14:06:03.581610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.952 [2024-12-09 14:06:03.680992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.213 Running I/O for 5 seconds... 00:13:04.233 22270.00 IOPS, 86.99 MiB/s [2024-12-09T14:06:06.971Z] 21764.50 IOPS, 85.02 MiB/s [2024-12-09T14:06:08.364Z] 22242.67 IOPS, 86.89 MiB/s [2024-12-09T14:06:09.305Z] 18947.00 IOPS, 74.01 MiB/s [2024-12-09T14:06:09.305Z] 19743.00 IOPS, 77.12 MiB/s 00:13:07.511 Latency(us) 00:13:07.511 [2024-12-09T14:06:09.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.511 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:07.511 xnvme_bdev : 5.01 19717.95 77.02 0.00 0.00 3239.21 75.22 509769.26 00:13:07.511 [2024-12-09T14:06:09.305Z] =================================================================================================================== 00:13:07.511 [2024-12-09T14:06:09.305Z] Total : 19717.95 77.02 0.00 0.00 3239.21 75.22 509769.26 00:13:08.082 ************************************ 00:13:08.082 END TEST xnvme_bdevperf 00:13:08.082 ************************************ 00:13:08.082 00:13:08.082 real 0m12.639s 00:13:08.082 user 0m4.889s 00:13:08.082 sys 0m6.410s 00:13:08.082 14:06:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.082 14:06:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:08.082 14:06:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:08.082 14:06:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:08.082 14:06:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.082 14:06:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.082 ************************************ 00:13:08.082 START TEST xnvme_fio_plugin 00:13:08.082 ************************************ 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:08.082 14:06:09 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:08.082 14:06:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.082 { 00:13:08.082 "subsystems": [ 00:13:08.082 { 00:13:08.082 "subsystem": "bdev", 00:13:08.082 "config": [ 00:13:08.082 { 00:13:08.082 "params": { 00:13:08.082 "io_mechanism": "libaio", 00:13:08.082 "conserve_cpu": true, 00:13:08.082 "filename": "/dev/nvme0n1", 00:13:08.082 "name": "xnvme_bdev" 00:13:08.082 }, 00:13:08.082 "method": "bdev_xnvme_create" 00:13:08.082 }, 00:13:08.082 { 00:13:08.082 "method": "bdev_wait_for_examine" 00:13:08.082 } 00:13:08.082 ] 00:13:08.082 } 00:13:08.082 ] 00:13:08.082 } 00:13:08.342 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:08.342 fio-3.35 00:13:08.342 Starting 1 thread 00:13:14.931 00:13:14.931 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69760: Mon Dec 9 14:06:15 2024 00:13:14.931 read: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(451MiB/5001msec) 00:13:14.931 slat (usec): min=4, max=1975, avg=37.78, stdev=128.31 00:13:14.931 clat (usec): min=105, max=7005, avg=1721.35, stdev=781.45 00:13:14.931 lat (usec): min=183, max=7030, avg=1759.13, stdev=769.42 00:13:14.931 clat percentiles (usec): 00:13:14.931 | 1.00th=[ 258], 5.00th=[ 537], 10.00th=[ 709], 20.00th=[ 1020], 00:13:14.931 | 30.00th=[ 1270], 40.00th=[ 1483], 50.00th=[ 1696], 60.00th=[ 1876], 00:13:14.931 | 70.00th=[ 2089], 80.00th=[ 2376], 90.00th=[ 2737], 95.00th=[ 3064], 00:13:14.931 | 99.00th=[ 3752], 99.50th=[ 4047], 99.90th=[ 4752], 99.95th=[ 5080], 00:13:14.931 | 99.99th=[ 5735] 00:13:14.931 bw ( KiB/s): min=89128, max=97720, 
per=100.00%, avg=92518.22, stdev=2596.34, samples=9 00:13:14.931 iops : min=22282, max=24430, avg=23129.78, stdev=649.14, samples=9 00:13:14.931 lat (usec) : 250=0.90%, 500=3.35%, 750=6.91%, 1000=8.13% 00:13:14.931 lat (msec) : 2=47.18%, 4=32.97%, 10=0.57% 00:13:14.931 cpu : usr=23.48%, sys=68.34%, ctx=11251, majf=0, minf=764 00:13:14.931 IO depths : 1=0.2%, 2=0.9%, 4=3.2%, 8=9.3%, 16=24.6%, 32=59.9%, >=64=1.9% 00:13:14.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.931 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:13:14.931 issued rwts: total=115510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:14.931 00:13:14.931 Run status group 0 (all jobs): 00:13:14.931 READ: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=451MiB (473MB), run=5001-5001msec 00:13:14.931 ----------------------------------------------------- 00:13:14.931 Suppressions used: 00:13:14.931 count bytes template 00:13:14.931 1 11 /usr/src/fio/parse.c 00:13:14.931 1 8 libtcmalloc_minimal.so 00:13:14.931 1 904 libcrypto.so 00:13:14.931 ----------------------------------------------------- 00:13:14.931 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.931 14:06:16 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:14.931 14:06:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.931 { 00:13:14.931 "subsystems": [ 00:13:14.931 { 00:13:14.931 "subsystem": "bdev", 00:13:14.931 "config": [ 00:13:14.931 { 00:13:14.931 "params": { 00:13:14.931 "io_mechanism": "libaio", 00:13:14.931 "conserve_cpu": true, 00:13:14.931 "filename": "/dev/nvme0n1", 00:13:14.931 "name": "xnvme_bdev" 00:13:14.931 }, 00:13:14.931 "method": "bdev_xnvme_create" 00:13:14.931 }, 00:13:14.931 { 00:13:14.931 "method": "bdev_wait_for_examine" 00:13:14.931 } 00:13:14.931 ] 00:13:14.931 } 00:13:14.931 ] 00:13:14.931 } 00:13:14.931 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:14.931 fio-3.35 00:13:14.931 Starting 1 thread 00:13:21.513 00:13:21.513 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69846: Mon Dec 9 14:06:22 2024 00:13:21.513 write: IOPS=23.3k, BW=91.0MiB/s (95.4MB/s)(455MiB/5001msec); 0 zone resets 00:13:21.513 slat (usec): min=4, max=2228, avg=37.49, stdev=123.06 00:13:21.513 clat (usec): min=106, max=14052, avg=1697.56, stdev=791.70 00:13:21.513 lat (usec): min=190, max=14074, avg=1735.05, stdev=781.41 00:13:21.513 clat percentiles (usec): 00:13:21.513 | 1.00th=[ 251], 5.00th=[ 506], 10.00th=[ 676], 20.00th=[ 979], 00:13:21.513 | 30.00th=[ 1254], 40.00th=[ 1467], 50.00th=[ 1663], 60.00th=[ 1860], 00:13:21.513 | 70.00th=[ 2073], 80.00th=[ 2343], 90.00th=[ 2704], 95.00th=[ 3032], 00:13:21.513 | 99.00th=[ 3818], 99.50th=[ 4113], 99.90th=[ 4883], 99.95th=[ 5538], 00:13:21.513 | 99.99th=[ 9503] 00:13:21.513 bw ( KiB/s): min=88384, max=100808, per=100.00%, avg=93280.00, stdev=3668.98, samples=9 00:13:21.513 iops : min=22096, max=25202, avg=23320.00, stdev=917.24, samples=9 00:13:21.513 lat (usec) : 250=0.99%, 500=3.90%, 750=7.39%, 1000=8.32% 00:13:21.513 lat (msec) : 2=46.07%, 4=32.67%, 10=0.65%, 20=0.01% 00:13:21.513 cpu : usr=24.70%, sys=66.46%, ctx=12, majf=0, minf=765 00:13:21.513 IO depths : 1=0.2%, 2=0.8%, 4=3.3%, 8=9.7%, 16=25.3%, 32=58.8%, >=64=1.9% 00:13:21.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.513 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:13:21.513 issued rwts: total=0,116466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.513 00:13:21.513 Run status group 0 (all jobs): 00:13:21.513 WRITE: bw=91.0MiB/s (95.4MB/s), 91.0MiB/s-91.0MiB/s (95.4MB/s-95.4MB/s), io=455MiB (477MB), run=5001-5001msec 00:13:21.775 ----------------------------------------------------- 00:13:21.775 Suppressions used: 00:13:21.775 count bytes template 00:13:21.775 1 11 /usr/src/fio/parse.c 00:13:21.775 1 8 libtcmalloc_minimal.so 00:13:21.775 1 904 libcrypto.so 00:13:21.775 ----------------------------------------------------- 00:13:21.775 00:13:21.775 ************************************ 00:13:21.775 END TEST xnvme_fio_plugin 00:13:21.775 
************************************ 00:13:21.775 00:13:21.775 real 0m13.577s 00:13:21.775 user 0m5.135s 00:13:21.775 sys 0m7.200s 00:13:21.775 14:06:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.775 14:06:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:21.775 14:06:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:21.775 14:06:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:21.775 14:06:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.775 14:06:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.775 ************************************ 00:13:21.775 START TEST xnvme_rpc 00:13:21.775 ************************************ 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69932 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69932 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69932 ']' 00:13:21.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.775 14:06:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.775 [2024-12-09 14:06:23.469369] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
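With the libaio legs finished, the outer loop in xnvme.sh advances to the io_uring mechanism and restarts the same three sub-tests. Reconstructed from the traced script lines (@75-@88), the matrix driving this whole log looks roughly like:

# Rough reconstruction of the xnvme.sh test matrix; run_test and the
# arrays are the harness helpers visible in the trace above.
for io in "${xnvme_io[@]}"; do                 # libaio io_uring io_uring_cmd
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    method_bdev_xnvme_create_0["filename"]=${xnvme_filename[$io]}
    for cc in "${xnvme_conserve_cpu[@]}"; do   # false, then true
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc           # create / inspect / delete
        run_test xnvme_bdevperf xnvme_bdevperf # randread + randwrite
        run_test xnvme_fio_plugin xnvme_fio_plugin
    done
done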
00:13:21.775 [2024-12-09 14:06:23.469692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69932 ] 00:13:22.036 [2024-12-09 14:06:23.627342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.036 [2024-12-09 14:06:23.725862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.608 xnvme_bdev 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.608 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.869 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69932 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69932 ']' 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69932 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69932 00:13:22.870 killing process with pid 69932 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69932' 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69932 00:13:22.870 14:06:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69932 00:13:24.782 ************************************ 00:13:24.782 END TEST xnvme_rpc 00:13:24.782 ************************************ 00:13:24.782 00:13:24.782 real 0m2.709s 00:13:24.782 user 0m2.830s 00:13:24.782 sys 0m0.337s 00:13:24.782 14:06:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.782 14:06:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.782 14:06:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:24.782 14:06:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.782 14:06:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.782 14:06:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.782 ************************************ 00:13:24.782 START TEST xnvme_bdevperf 00:13:24.782 ************************************ 00:13:24.782 14:06:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:24.782 14:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:24.782 14:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:24.782 14:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:24.782 14:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:24.782 14:06:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:24.782 14:06:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:24.782 14:06:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:24.782 { 00:13:24.782 "subsystems": [ 00:13:24.782 { 00:13:24.782 "subsystem": "bdev", 00:13:24.782 "config": [ 00:13:24.782 { 00:13:24.782 "params": { 00:13:24.782 "io_mechanism": "io_uring", 00:13:24.782 "conserve_cpu": false, 00:13:24.782 "filename": "/dev/nvme0n1", 00:13:24.782 "name": "xnvme_bdev" 00:13:24.782 }, 00:13:24.782 "method": "bdev_xnvme_create" 00:13:24.782 }, 00:13:24.782 { 00:13:24.782 "method": "bdev_wait_for_examine" 00:13:24.782 } 00:13:24.782 ] 00:13:24.782 } 00:13:24.782 ] 00:13:24.782 } 00:13:24.782 [2024-12-09 14:06:26.250227] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:13:24.782 [2024-12-09 14:06:26.250380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70002 ] 00:13:24.782 [2024-12-09 14:06:26.421090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.782 [2024-12-09 14:06:26.516952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.043 Running I/O for 5 seconds... 00:13:27.370 28154.00 IOPS, 109.98 MiB/s [2024-12-09T14:06:30.107Z] 25761.00 IOPS, 100.63 MiB/s [2024-12-09T14:06:31.048Z] 26064.67 IOPS, 101.82 MiB/s [2024-12-09T14:06:32.043Z] 25160.75 IOPS, 98.28 MiB/s [2024-12-09T14:06:32.043Z] 25364.80 IOPS, 99.08 MiB/s 00:13:30.249 Latency(us) 00:13:30.249 [2024-12-09T14:06:32.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.249 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:30.249 xnvme_bdev : 5.01 25335.37 98.97 0.00 0.00 2520.91 463.16 8267.62 00:13:30.249 [2024-12-09T14:06:32.043Z] =================================================================================================================== 00:13:30.249 [2024-12-09T14:06:32.043Z] Total : 25335.37 98.97 0.00 0.00 2520.91 463.16 8267.62 00:13:30.821 14:06:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:30.821 14:06:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:30.821 14:06:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:30.821 14:06:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:30.821 14:06:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.097 { 00:13:31.097 "subsystems": [ 00:13:31.097 { 00:13:31.097 "subsystem": "bdev", 00:13:31.097 "config": [ 00:13:31.097 { 00:13:31.097 "params": { 00:13:31.097 "io_mechanism": "io_uring", 00:13:31.097 "conserve_cpu": false, 00:13:31.097 "filename": "/dev/nvme0n1", 00:13:31.097 "name": "xnvme_bdev" 00:13:31.097 }, 00:13:31.097 "method": "bdev_xnvme_create" 00:13:31.097 }, 00:13:31.097 { 00:13:31.097 "method": "bdev_wait_for_examine" 00:13:31.097 } 00:13:31.097 ] 00:13:31.097 } 00:13:31.097 ] 00:13:31.097 } 00:13:31.097 [2024-12-09 14:06:32.661116] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
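Both bdevperf invocations in this test are fully described by their flags plus the JSON config fed on /dev/fd/62: 4 KiB I/O (-o 4096), queue depth 64 (-q 64), a 5-second run (-t 5), against the single bdev named by -T. A standalone equivalent of the randread run above, with the config written to a regular file instead of a pipe (the /tmp path is illustrative):

# Sketch only -- mirrors the logged randread bdevperf run.
cat > /tmp/xnvme_bdev.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"io_uring","conserve_cpu":false,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_bdev.json -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev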
00:13:31.097 [2024-12-09 14:06:32.661268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70076 ] 00:13:31.097 [2024-12-09 14:06:32.824039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.358 [2024-12-09 14:06:32.967434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.620 Running I/O for 5 seconds... 00:13:33.953 14829.00 IOPS, 57.93 MiB/s [2024-12-09T14:06:36.317Z] 14810.00 IOPS, 57.85 MiB/s [2024-12-09T14:06:37.690Z] 14748.00 IOPS, 57.61 MiB/s [2024-12-09T14:06:38.624Z] 17505.00 IOPS, 68.38 MiB/s [2024-12-09T14:06:38.624Z] 19065.80 IOPS, 74.48 MiB/s 00:13:36.830 Latency(us) 00:13:36.830 [2024-12-09T14:06:38.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.830 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:36.830 xnvme_bdev : 5.01 19057.19 74.44 0.00 0.00 3353.06 77.19 175838.13 00:13:36.830 [2024-12-09T14:06:38.624Z] =================================================================================================================== 00:13:36.830 [2024-12-09T14:06:38.624Z] Total : 19057.19 74.44 0.00 0.00 3353.06 77.19 175838.13 00:13:37.402 00:13:37.402 real 0m12.876s 00:13:37.402 user 0m5.565s 00:13:37.402 sys 0m6.963s 00:13:37.402 ************************************ 00:13:37.402 END TEST xnvme_bdevperf 00:13:37.402 ************************************ 00:13:37.403 14:06:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.403 14:06:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.403 14:06:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:37.403 14:06:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:37.403 14:06:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.403 14:06:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.403 ************************************ 00:13:37.403 START TEST xnvme_fio_plugin 00:13:37.403 ************************************ 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:37.403 14:06:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:37.403 { 00:13:37.403 "subsystems": [ 00:13:37.403 { 00:13:37.403 "subsystem": "bdev", 00:13:37.403 "config": [ 00:13:37.403 { 00:13:37.403 "params": { 00:13:37.403 "io_mechanism": "io_uring", 00:13:37.403 "conserve_cpu": false, 00:13:37.403 "filename": "/dev/nvme0n1", 00:13:37.403 "name": "xnvme_bdev" 00:13:37.403 }, 00:13:37.403 "method": "bdev_xnvme_create" 00:13:37.403 }, 00:13:37.403 { 00:13:37.403 "method": "bdev_wait_for_examine" 00:13:37.403 } 00:13:37.403 ] 00:13:37.403 } 00:13:37.403 ] 00:13:37.403 } 00:13:37.664 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:37.664 fio-3.35 00:13:37.664 Starting 1 thread 00:13:44.264 00:13:44.264 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70195: Mon Dec 9 14:06:44 2024 00:13:44.264 read: IOPS=22.1k, BW=86.3MiB/s (90.5MB/s)(432MiB/5002msec) 00:13:44.264 slat (usec): min=2, max=247, avg= 4.61, stdev= 2.54 00:13:44.264 clat (usec): min=738, max=8027, avg=2715.81, stdev=402.99 00:13:44.264 lat (usec): min=743, max=8055, avg=2720.42, stdev=403.37 00:13:44.264 clat percentiles (usec): 00:13:44.264 | 1.00th=[ 1795], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2442], 00:13:44.264 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:13:44.264 | 70.00th=[ 2868], 80.00th=[ 2999], 90.00th=[ 3195], 95.00th=[ 3425], 00:13:44.264 | 99.00th=[ 3785], 99.50th=[ 3949], 99.90th=[ 4293], 99.95th=[ 7570], 00:13:44.264 | 99.99th=[ 7963] 00:13:44.264 bw ( KiB/s): 
min=83928, max=99248, per=100.00%, avg=88638.22, stdev=4396.13, samples=9 00:13:44.264 iops : min=20982, max=24812, avg=22159.56, stdev=1099.03, samples=9 00:13:44.264 lat (usec) : 750=0.01%, 1000=0.01% 00:13:44.264 lat (msec) : 2=2.86%, 4=96.78%, 10=0.35% 00:13:44.264 cpu : usr=30.59%, sys=66.37%, ctx=2181, majf=0, minf=762 00:13:44.264 IO depths : 1=1.4%, 2=2.9%, 4=5.9%, 8=11.7%, 16=23.7%, 32=52.7%, >=64=1.7% 00:13:44.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.264 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:44.264 issued rwts: total=110560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:44.264 00:13:44.264 Run status group 0 (all jobs): 00:13:44.264 READ: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=432MiB (453MB), run=5002-5002msec 00:13:44.264 ----------------------------------------------------- 00:13:44.264 Suppressions used: 00:13:44.264 count bytes template 00:13:44.264 1 11 /usr/src/fio/parse.c 00:13:44.264 1 8 libtcmalloc_minimal.so 00:13:44.264 1 904 libcrypto.so 00:13:44.264 ----------------------------------------------------- 00:13:44.264 00:13:44.264 14:06:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.264 14:06:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.264 14:06:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:44.264 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.265 14:06:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:44.265 14:06:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:44.265 14:06:46 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:44.265 14:06:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:44.265 14:06:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:44.265 14:06:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.265 { 00:13:44.265 "subsystems": [ 00:13:44.265 { 00:13:44.265 "subsystem": "bdev", 00:13:44.265 "config": [ 00:13:44.265 { 00:13:44.265 "params": { 00:13:44.265 "io_mechanism": "io_uring", 00:13:44.265 "conserve_cpu": false, 00:13:44.265 "filename": "/dev/nvme0n1", 00:13:44.265 "name": "xnvme_bdev" 00:13:44.265 }, 00:13:44.265 "method": "bdev_xnvme_create" 00:13:44.265 }, 00:13:44.265 { 00:13:44.265 "method": "bdev_wait_for_examine" 00:13:44.265 } 00:13:44.265 ] 00:13:44.265 } 00:13:44.265 ] 00:13:44.265 } 00:13:44.525 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:44.525 fio-3.35 00:13:44.525 Starting 1 thread 00:13:51.106 00:13:51.106 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70287: Mon Dec 9 14:06:51 2024 00:13:51.106 write: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(430MiB/5002msec); 0 zone resets 00:13:51.106 slat (usec): min=2, max=180, avg= 4.96, stdev= 2.72 00:13:51.106 clat (usec): min=653, max=19757, avg=2709.21, stdev=510.36 00:13:51.106 lat (usec): min=658, max=19762, avg=2714.18, stdev=510.64 00:13:51.106 clat percentiles (usec): 00:13:51.106 | 1.00th=[ 1942], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:13:51.106 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2769], 00:13:51.106 | 70.00th=[ 2868], 80.00th=[ 2999], 90.00th=[ 3195], 95.00th=[ 3392], 00:13:51.106 | 99.00th=[ 3785], 99.50th=[ 4015], 99.90th=[ 4490], 99.95th=[11600], 00:13:51.106 | 99.99th=[19530] 00:13:51.106 bw ( KiB/s): min=79872, max=96632, per=100.00%, avg=88608.00, stdev=4247.63, samples=9 00:13:51.106 iops : min=19968, max=24158, avg=22152.00, stdev=1061.91, samples=9 00:13:51.106 lat (usec) : 750=0.01%, 1000=0.01% 00:13:51.106 lat (msec) : 2=1.71%, 4=97.75%, 10=0.48%, 20=0.05% 00:13:51.106 cpu : usr=33.09%, sys=65.69%, ctx=28, majf=0, minf=763 00:13:51.106 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:51.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.106 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:51.106 issued rwts: total=0,110051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.106 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:51.106 00:13:51.106 Run status group 0 (all jobs): 00:13:51.106 WRITE: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=430MiB (451MB), run=5002-5002msec 00:13:51.366 ----------------------------------------------------- 00:13:51.366 Suppressions used: 00:13:51.366 count bytes template 00:13:51.366 1 11 /usr/src/fio/parse.c 00:13:51.366 1 8 libtcmalloc_minimal.so 00:13:51.366 1 904 libcrypto.so 00:13:51.366 ----------------------------------------------------- 00:13:51.366 00:13:51.366 ************************************ 00:13:51.366 END TEST xnvme_fio_plugin 00:13:51.366 
************************************ 00:13:51.366 00:13:51.366 real 0m13.877s 00:13:51.366 user 0m6.604s 00:13:51.366 sys 0m6.720s 00:13:51.366 14:06:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.366 14:06:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.366 14:06:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:51.366 14:06:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:51.366 14:06:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:51.366 14:06:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:51.366 14:06:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.366 14:06:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.366 14:06:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.366 ************************************ 00:13:51.366 START TEST xnvme_rpc 00:13:51.366 ************************************ 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:51.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70373 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70373 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70373 ']' 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.366 14:06:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.626 [2024-12-09 14:06:53.170949] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
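This second pass through the rpc/bdevperf/fio trio differs from the first only in conserve_cpu: the cc["true"]=-c mapping above makes the test append -c when it creates the bdev, so the target ends up with conserve_cpu=true in its config. Sketched by hand, the JSON-RPC request the target receives differs from the earlier pass by a single boolean (the id value is arbitrary):

# Sketch only -- request carried over /var/tmp/spdk.sock; compare with the
# "conserve_cpu": false configs logged in the first pass above.
{"jsonrpc": "2.0", "id": 1, "method": "bdev_xnvme_create",
 "params": {"filename": "/dev/nvme0n1", "name": "xnvme_bdev",
            "io_mechanism": "io_uring", "conserve_cpu": true}}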
00:13:51.626 [2024-12-09 14:06:53.171130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70373 ] 00:13:51.626 [2024-12-09 14:06:53.347733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.886 [2024-12-09 14:06:53.443777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.458 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.459 xnvme_bdev 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:52.459 14:06:54 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70373 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70373 ']' 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70373 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70373 00:13:52.459 killing process with pid 70373 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70373' 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70373 00:13:52.459 14:06:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70373 00:13:54.375 ************************************ 00:13:54.375 END TEST xnvme_rpc 00:13:54.375 ************************************ 00:13:54.375 00:13:54.375 real 0m2.657s 00:13:54.375 user 0m2.780s 00:13:54.375 sys 0m0.389s 00:13:54.375 14:06:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.375 14:06:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.375 14:06:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:54.375 14:06:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.375 14:06:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.375 14:06:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.375 ************************************ 00:13:54.375 START TEST xnvme_bdevperf 00:13:54.375 ************************************ 00:13:54.375 14:06:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:54.375 14:06:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:54.375 14:06:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:54.375 14:06:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:54.376 14:06:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:54.376 14:06:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:54.376 14:06:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:54.376 14:06:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:54.376 { 00:13:54.376 "subsystems": [ 00:13:54.376 { 00:13:54.376 "subsystem": "bdev", 00:13:54.376 "config": [ 00:13:54.376 { 00:13:54.376 "params": { 00:13:54.376 "io_mechanism": "io_uring", 00:13:54.376 "conserve_cpu": true, 00:13:54.376 "filename": "/dev/nvme0n1", 00:13:54.376 "name": "xnvme_bdev" 00:13:54.376 }, 00:13:54.376 "method": "bdev_xnvme_create" 00:13:54.376 }, 00:13:54.376 { 00:13:54.376 "method": "bdev_wait_for_examine" 00:13:54.376 } 00:13:54.376 ] 00:13:54.376 } 00:13:54.376 ] 00:13:54.376 } 00:13:54.376 [2024-12-09 14:06:55.830471] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:13:54.376 [2024-12-09 14:06:55.830606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70442 ] 00:13:54.376 [2024-12-09 14:06:55.990437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.376 [2024-12-09 14:06:56.089562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.637 Running I/O for 5 seconds... 00:13:56.959 28183.00 IOPS, 110.09 MiB/s [2024-12-09T14:06:59.685Z] 30976.50 IOPS, 121.00 MiB/s [2024-12-09T14:07:00.618Z] 33948.67 IOPS, 132.61 MiB/s [2024-12-09T14:07:01.558Z] 35122.25 IOPS, 137.20 MiB/s [2024-12-09T14:07:01.558Z] 34379.60 IOPS, 134.30 MiB/s 00:13:59.764 Latency(us) 00:13:59.764 [2024-12-09T14:07:01.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.764 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:59.764 xnvme_bdev : 5.01 34314.68 134.04 0.00 0.00 1860.63 1046.06 8822.15 00:13:59.764 [2024-12-09T14:07:01.558Z] =================================================================================================================== 00:13:59.764 [2024-12-09T14:07:01.558Z] Total : 34314.68 134.04 0.00 0.00 1860.63 1046.06 8822.15 00:14:00.703 14:07:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.703 14:07:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:00.703 14:07:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:00.703 14:07:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.703 14:07:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.703 { 00:14:00.703 "subsystems": [ 00:14:00.703 { 00:14:00.703 "subsystem": "bdev", 00:14:00.703 "config": [ 00:14:00.703 { 00:14:00.703 "params": { 00:14:00.703 "io_mechanism": "io_uring", 00:14:00.703 "conserve_cpu": true, 00:14:00.703 "filename": "/dev/nvme0n1", 00:14:00.703 "name": "xnvme_bdev" 00:14:00.703 }, 00:14:00.703 "method": "bdev_xnvme_create" 00:14:00.703 }, 00:14:00.703 { 00:14:00.703 "method": "bdev_wait_for_examine" 00:14:00.703 } 00:14:00.703 ] 00:14:00.703 } 00:14:00.703 ] 00:14:00.703 } 00:14:00.703 [2024-12-09 14:07:02.238137] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
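The { "subsystems": ... } blocks interleaved with the output come from gen_conf, which serializes the method_bdev_xnvme_create_0 associative array set at the top of xnvme.sh; process substitution then hands that JSON to bdevperf and fio as /dev/fd/62. A stripped-down sketch of the pattern (my_gen_conf is illustrative, not the real dd/common.sh helper):

# Sketch only -- emit a one-bdev config from shell variables and pass it on an fd.
declare -A p=([io_mechanism]=io_uring [conserve_cpu]=true
              [filename]=/dev/nvme0n1 [name]=xnvme_bdev)
my_gen_conf() {
  printf '{"subsystems":[{"subsystem":"bdev","config":[{"params":{'
  printf '"io_mechanism":"%s","conserve_cpu":%s,"filename":"%s","name":"%s"' \
         "${p[io_mechanism]}" "${p[conserve_cpu]}" "${p[filename]}" "${p[name]}"
  printf '},"method":"bdev_xnvme_create"},{"method":"bdev_wait_for_examine"}]}]}\n'
}
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json <(my_gen_conf) -q 64 -o 4096 -w randwrite -t 5 -T xnvme_bdev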
00:14:00.703 [2024-12-09 14:07:02.238276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70517 ] 00:14:00.703 [2024-12-09 14:07:02.403850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.964 [2024-12-09 14:07:02.527557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.224 Running I/O for 5 seconds... 00:14:03.112 18134.00 IOPS, 70.84 MiB/s [2024-12-09T14:07:05.974Z] 18532.00 IOPS, 72.39 MiB/s [2024-12-09T14:07:06.916Z] 18480.00 IOPS, 72.19 MiB/s [2024-12-09T14:07:07.857Z] 18089.00 IOPS, 70.66 MiB/s [2024-12-09T14:07:07.857Z] 18163.20 IOPS, 70.95 MiB/s 00:14:06.063 Latency(us) 00:14:06.063 [2024-12-09T14:07:07.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.063 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:06.063 xnvme_bdev : 5.00 18160.03 70.94 0.00 0.00 3518.24 101.61 73400.32 00:14:06.063 [2024-12-09T14:07:07.857Z] =================================================================================================================== 00:14:06.063 [2024-12-09T14:07:07.857Z] Total : 18160.03 70.94 0.00 0.00 3518.24 101.61 73400.32 00:14:07.005 00:14:07.005 real 0m12.862s 00:14:07.005 user 0m7.510s 00:14:07.005 sys 0m4.187s 00:14:07.005 14:07:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.005 14:07:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.005 ************************************ 00:14:07.005 END TEST xnvme_bdevperf 00:14:07.005 ************************************ 00:14:07.005 14:07:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:07.005 14:07:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.005 14:07:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.005 14:07:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.005 ************************************ 00:14:07.005 START TEST xnvme_fio_plugin 00:14:07.005 ************************************ 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:07.005 14:07:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.005 { 00:14:07.005 "subsystems": [ 00:14:07.005 { 00:14:07.005 "subsystem": "bdev", 00:14:07.005 "config": [ 00:14:07.005 { 00:14:07.005 "params": { 00:14:07.005 "io_mechanism": "io_uring", 00:14:07.005 "conserve_cpu": true, 00:14:07.005 "filename": "/dev/nvme0n1", 00:14:07.005 "name": "xnvme_bdev" 00:14:07.005 }, 00:14:07.005 "method": "bdev_xnvme_create" 00:14:07.005 }, 00:14:07.005 { 00:14:07.005 "method": "bdev_wait_for_examine" 00:14:07.005 } 00:14:07.005 ] 00:14:07.005 } 00:14:07.005 ] 00:14:07.005 } 00:14:07.266 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:07.266 fio-3.35 00:14:07.266 Starting 1 thread 00:14:13.856 00:14:13.856 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70636: Mon Dec 9 14:07:14 2024 00:14:13.856 read: IOPS=27.3k, BW=107MiB/s (112MB/s)(534MiB/5002msec) 00:14:13.856 slat (nsec): min=2870, max=70794, avg=3761.07, stdev=1809.77 00:14:13.856 clat (usec): min=1403, max=6714, avg=2191.54, stdev=349.31 00:14:13.856 lat (usec): min=1406, max=6729, avg=2195.30, stdev=349.74 00:14:13.856 clat percentiles (usec): 00:14:13.856 | 1.00th=[ 1631], 5.00th=[ 1745], 10.00th=[ 1811], 20.00th=[ 1893], 00:14:13.856 | 30.00th=[ 1975], 40.00th=[ 2057], 50.00th=[ 2147], 60.00th=[ 2212], 00:14:13.856 | 70.00th=[ 2343], 80.00th=[ 2442], 90.00th=[ 2638], 95.00th=[ 2835], 00:14:13.856 | 99.00th=[ 3130], 99.50th=[ 3261], 99.90th=[ 4113], 99.95th=[ 4686], 00:14:13.856 | 99.99th=[ 6587] 00:14:13.856 bw ( 
KiB/s): min=107305, max=111616, per=100.00%, avg=109505.89, stdev=1735.41, samples=9 00:14:13.856 iops : min=26826, max=27904, avg=27376.44, stdev=433.89, samples=9 00:14:13.856 lat (msec) : 2=32.84%, 4=67.06%, 10=0.11% 00:14:13.856 cpu : usr=59.21%, sys=37.35%, ctx=2249, majf=0, minf=762 00:14:13.856 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:13.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.856 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:13.856 issued rwts: total=136783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:13.856 00:14:13.856 Run status group 0 (all jobs): 00:14:13.856 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=534MiB (560MB), run=5002-5002msec 00:14:13.856 ----------------------------------------------------- 00:14:13.856 Suppressions used: 00:14:13.856 count bytes template 00:14:13.856 1 11 /usr/src/fio/parse.c 00:14:13.856 1 8 libtcmalloc_minimal.so 00:14:13.856 1 904 libcrypto.so 00:14:13.856 ----------------------------------------------------- 00:14:13.856 00:14:13.856 14:07:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:13.857 14:07:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.857 { 00:14:13.857 "subsystems": [ 00:14:13.857 { 00:14:13.857 "subsystem": "bdev", 00:14:13.857 "config": [ 00:14:13.857 { 00:14:13.857 "params": { 00:14:13.857 "io_mechanism": "io_uring", 00:14:13.857 "conserve_cpu": true, 00:14:13.857 "filename": "/dev/nvme0n1", 00:14:13.857 "name": "xnvme_bdev" 00:14:13.857 }, 00:14:13.857 "method": "bdev_xnvme_create" 00:14:13.857 }, 00:14:13.857 { 00:14:13.857 "method": "bdev_wait_for_examine" 00:14:13.857 } 00:14:13.857 ] 00:14:13.857 } 00:14:13.857 ] 00:14:13.857 } 00:14:14.117 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:14.117 fio-3.35 00:14:14.117 Starting 1 thread 00:14:20.728 00:14:20.728 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70728: Mon Dec 9 14:07:21 2024 00:14:20.728 write: IOPS=24.8k, BW=96.7MiB/s (101MB/s)(484MiB/5003msec); 0 zone resets 00:14:20.728 slat (nsec): min=2895, max=92788, avg=4014.82, stdev=2035.19 00:14:20.728 clat (usec): min=98, max=169712, avg=2427.02, stdev=6441.13 00:14:20.728 lat (usec): min=104, max=169716, avg=2431.04, stdev=6441.14 00:14:20.728 clat percentiles (usec): 00:14:20.728 | 1.00th=[ 1565], 5.00th=[ 1663], 10.00th=[ 1729], 20.00th=[ 1811], 00:14:20.728 | 30.00th=[ 1893], 40.00th=[ 1975], 50.00th=[ 2040], 60.00th=[ 2114], 00:14:20.728 | 70.00th=[ 2212], 80.00th=[ 2343], 90.00th=[ 2540], 95.00th=[ 2769], 00:14:20.728 | 99.00th=[ 3359], 99.50th=[ 4621], 99.90th=[133694], 99.95th=[162530], 00:14:20.728 | 99.99th=[166724] 00:14:20.728 bw ( KiB/s): min=48560, max=117760, per=100.00%, avg=105735.11, stdev=21620.10, samples=9 00:14:20.728 iops : min=12140, max=29440, avg=26433.78, stdev=5405.03, samples=9 00:14:20.728 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.06% 00:14:20.728 lat (msec) : 2=44.26%, 4=55.04%, 10=0.27%, 20=0.03%, 50=0.03% 00:14:20.728 lat (msec) : 100=0.05%, 250=0.21% 00:14:20.728 cpu : usr=67.97%, sys=28.49%, ctx=18, majf=0, minf=763 00:14:20.728 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=24.9%, 32=50.4%, >=64=1.7% 00:14:20.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.728 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:20.728 issued rwts: total=0,123894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:20.728 00:14:20.728 Run status group 0 (all jobs): 00:14:20.728 WRITE: bw=96.7MiB/s (101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=484MiB (507MB), run=5003-5003msec 00:14:20.728 ----------------------------------------------------- 00:14:20.728 Suppressions used: 00:14:20.728 count bytes template 00:14:20.728 1 11 /usr/src/fio/parse.c 00:14:20.729 1 8 libtcmalloc_minimal.so 00:14:20.729 1 904 libcrypto.so 00:14:20.729 ----------------------------------------------------- 00:14:20.729 00:14:20.729 
************************************ 00:14:20.729 END TEST xnvme_fio_plugin 00:14:20.729 ************************************ 00:14:20.729 00:14:20.729 real 0m13.602s 00:14:20.729 user 0m9.179s 00:14:20.729 sys 0m3.740s 00:14:20.729 14:07:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.729 14:07:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:20.729 14:07:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:20.729 14:07:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:20.729 14:07:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.729 14:07:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.729 ************************************ 00:14:20.729 START TEST xnvme_rpc 00:14:20.729 ************************************ 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70809 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70809 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70809 ']' 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:20.729 14:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.729 [2024-12-09 14:07:22.435098] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
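The loop now moves to its third io_mechanism, io_uring_cmd, and retargets the filename from the block device /dev/nvme0n1 to /dev/ng0n1. That switch is required because io_uring_cmd submits NVMe passthrough commands through the kernel's NVMe generic character device (the /dev/ngXnY nodes that recent kernels expose alongside /dev/nvmeXnY). A quick preflight sketch for a VM like this one:

# Sketch only -- confirm the generic char node is present before the pass runs.
ls -l /dev/ng0n1                 # expect type 'c' (character device)
test -c /dev/ng0n1 && echo "ng0n1 usable by io_uring_cmd"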
00:14:20.729 [2024-12-09 14:07:22.435219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70809 ] 00:14:20.990 [2024-12-09 14:07:22.592987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.990 [2024-12-09 14:07:22.690612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.562 xnvme_bdev 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.562 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:21.822 
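Every rpc_xnvme check in this test is the same two-step pattern: dump the live bdev subsystem config with framework_get_config and pluck one bdev_xnvme_create parameter out with jq. Generalized as a small helper (the function name is ours; the rpc.py path assumes this run's repo layout):

# Sketch only -- fetch any bdev_xnvme_create parameter for verification.
rpc_xnvme_param() {  # usage: rpc_xnvme_param io_mechanism
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
    | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}
[[ "$(rpc_xnvme_param io_mechanism)" == io_uring_cmd ]] && echo OK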
14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70809 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70809 ']' 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70809 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70809 00:14:21.822 killing process with pid 70809 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70809' 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70809 00:14:21.822 14:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70809 00:14:23.195 00:14:23.195 real 0m2.588s 00:14:23.195 user 0m2.694s 00:14:23.195 sys 0m0.349s 00:14:23.195 ************************************ 00:14:23.195 END TEST xnvme_rpc 00:14:23.195 ************************************ 00:14:23.195 14:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.195 14:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.195 14:07:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:23.195 14:07:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:23.195 14:07:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.195 14:07:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.454 ************************************ 00:14:23.454 START TEST xnvme_bdevperf 00:14:23.454 ************************************ 00:14:23.454 14:07:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:23.454 14:07:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:23.454 14:07:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:23.454 14:07:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.454 14:07:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:23.454 14:07:24 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:23.454 14:07:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:23.454 14:07:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:23.454 { 00:14:23.454 "subsystems": [ 00:14:23.454 { 00:14:23.454 "subsystem": "bdev", 00:14:23.454 "config": [ 00:14:23.454 { 00:14:23.454 "params": { 00:14:23.454 "io_mechanism": "io_uring_cmd", 00:14:23.454 "conserve_cpu": false, 00:14:23.454 "filename": "/dev/ng0n1", 00:14:23.454 "name": "xnvme_bdev" 00:14:23.454 }, 00:14:23.454 "method": "bdev_xnvme_create" 00:14:23.454 }, 00:14:23.454 { 00:14:23.454 "method": "bdev_wait_for_examine" 00:14:23.454 } 00:14:23.454 ] 00:14:23.454 } 00:14:23.454 ] 00:14:23.454 } 00:14:23.454 [2024-12-09 14:07:25.059726] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:14:23.454 [2024-12-09 14:07:25.059880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70877 ] 00:14:23.454 [2024-12-09 14:07:25.219630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.713 [2024-12-09 14:07:25.313803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.970 Running I/O for 5 seconds... 00:14:25.883 61432.00 IOPS, 239.97 MiB/s [2024-12-09T14:07:28.610Z] 60777.00 IOPS, 237.41 MiB/s [2024-12-09T14:07:29.983Z] 61367.33 IOPS, 239.72 MiB/s [2024-12-09T14:07:30.916Z] 62025.25 IOPS, 242.29 MiB/s [2024-12-09T14:07:30.916Z] 61393.60 IOPS, 239.82 MiB/s 00:14:29.122 Latency(us) 00:14:29.122 [2024-12-09T14:07:30.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.122 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:29.122 xnvme_bdev : 5.00 61348.59 239.64 0.00 0.00 1039.25 367.06 4159.02 00:14:29.122 [2024-12-09T14:07:30.916Z] =================================================================================================================== 00:14:29.122 [2024-12-09T14:07:30.916Z] Total : 61348.59 239.64 0.00 0.00 1039.25 367.06 4159.02 00:14:29.689 14:07:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:29.689 14:07:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:29.689 14:07:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:29.689 14:07:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:29.689 14:07:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.689 { 00:14:29.689 "subsystems": [ 00:14:29.689 { 00:14:29.689 "subsystem": "bdev", 00:14:29.689 "config": [ 00:14:29.689 { 00:14:29.689 "params": { 00:14:29.689 "io_mechanism": "io_uring_cmd", 00:14:29.689 "conserve_cpu": false, 00:14:29.689 "filename": "/dev/ng0n1", 00:14:29.689 "name": "xnvme_bdev" 00:14:29.689 }, 00:14:29.689 "method": "bdev_xnvme_create" 00:14:29.689 }, 00:14:29.689 { 00:14:29.689 "method": "bdev_wait_for_examine" 00:14:29.689 } 00:14:29.689 ] 00:14:29.689 } 00:14:29.689 ] 00:14:29.689 } 00:14:29.689 [2024-12-09 14:07:31.338645] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
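Each bdevperf pass in this section is driven by the JSON blob that gen_conf emits, handed to the binary on /dev/fd/62. A standalone reconstruction of the randread run above, assuming the job's build-tree layout; /tmp/xnvme_bdevperf.json is an illustrative scratch file, not a name the test uses:

cat > /tmp/xnvme_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": false,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# same flags as the traced invocation: queue depth 64, 5 s randread, 4 KiB I/O
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdevperf.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096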
00:14:29.689 [2024-12-09 14:07:31.338756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70957 ] 00:14:29.947 [2024-12-09 14:07:31.499379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.947 [2024-12-09 14:07:31.593894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.205 Running I/O for 5 seconds... 00:14:32.070 45424.00 IOPS, 177.44 MiB/s [2024-12-09T14:07:35.241Z] 49888.50 IOPS, 194.88 MiB/s [2024-12-09T14:07:36.185Z] 51629.33 IOPS, 201.68 MiB/s [2024-12-09T14:07:37.127Z] 47027.25 IOPS, 183.70 MiB/s [2024-12-09T14:07:37.127Z] 43351.20 IOPS, 169.34 MiB/s 00:14:35.333 Latency(us) 00:14:35.333 [2024-12-09T14:07:37.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.333 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:35.333 xnvme_bdev : 5.22 41546.50 162.29 0.00 0.00 1489.49 53.56 350063.06 00:14:35.333 [2024-12-09T14:07:37.127Z] =================================================================================================================== 00:14:35.333 [2024-12-09T14:07:37.127Z] Total : 41546.50 162.29 0.00 0.00 1489.49 53.56 350063.06 00:14:36.275 14:07:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:36.275 14:07:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:36.275 14:07:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:36.275 14:07:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:36.275 14:07:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:36.275 { 00:14:36.275 "subsystems": [ 00:14:36.275 { 00:14:36.275 "subsystem": "bdev", 00:14:36.275 "config": [ 00:14:36.275 { 00:14:36.275 "params": { 00:14:36.275 "io_mechanism": "io_uring_cmd", 00:14:36.275 "conserve_cpu": false, 00:14:36.275 "filename": "/dev/ng0n1", 00:14:36.275 "name": "xnvme_bdev" 00:14:36.275 }, 00:14:36.275 "method": "bdev_xnvme_create" 00:14:36.275 }, 00:14:36.275 { 00:14:36.275 "method": "bdev_wait_for_examine" 00:14:36.275 } 00:14:36.275 ] 00:14:36.275 } 00:14:36.275 ] 00:14:36.275 } 00:14:36.275 [2024-12-09 14:07:37.835324] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:14:36.275 [2024-12-09 14:07:37.835441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71030 ] 00:14:36.275 [2024-12-09 14:07:37.995499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.536 [2024-12-09 14:07:38.090231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.797 Running I/O for 5 seconds... 
00:14:38.683 48000.00 IOPS, 187.50 MiB/s [2024-12-09T14:07:41.430Z] 45399.00 IOPS, 177.34 MiB/s [2024-12-09T14:07:42.378Z] 44751.67 IOPS, 174.81 MiB/s [2024-12-09T14:07:43.764Z] 43979.25 IOPS, 171.79 MiB/s [2024-12-09T14:07:43.764Z] 43416.20 IOPS, 169.59 MiB/s 00:14:41.970 Latency(us) 00:14:41.970 [2024-12-09T14:07:43.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.970 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:41.970 xnvme_bdev : 5.00 43409.44 169.57 0.00 0.00 1470.86 385.97 237139.50 00:14:41.970 [2024-12-09T14:07:43.764Z] =================================================================================================================== 00:14:41.970 [2024-12-09T14:07:43.764Z] Total : 43409.44 169.57 0.00 0.00 1470.86 385.97 237139.50 00:14:42.541 14:07:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:42.541 14:07:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:42.541 14:07:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:42.541 14:07:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:42.541 14:07:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:42.541 { 00:14:42.541 "subsystems": [ 00:14:42.541 { 00:14:42.541 "subsystem": "bdev", 00:14:42.541 "config": [ 00:14:42.541 { 00:14:42.541 "params": { 00:14:42.541 "io_mechanism": "io_uring_cmd", 00:14:42.541 "conserve_cpu": false, 00:14:42.541 "filename": "/dev/ng0n1", 00:14:42.541 "name": "xnvme_bdev" 00:14:42.541 }, 00:14:42.541 "method": "bdev_xnvme_create" 00:14:42.541 }, 00:14:42.541 { 00:14:42.541 "method": "bdev_wait_for_examine" 00:14:42.541 } 00:14:42.541 ] 00:14:42.541 } 00:14:42.541 ] 00:14:42.541 } 00:14:42.541 [2024-12-09 14:07:44.124073] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:14:42.541 [2024-12-09 14:07:44.124213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71100 ] 00:14:42.541 [2024-12-09 14:07:44.288661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.802 [2024-12-09 14:07:44.416181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.061 Running I/O for 5 seconds... 
00:14:44.943 132.00 IOPS, 0.52 MiB/s [2024-12-09T14:07:48.121Z] 161.00 IOPS, 0.63 MiB/s [2024-12-09T14:07:49.061Z] 158.33 IOPS, 0.62 MiB/s [2024-12-09T14:07:50.047Z] 156.00 IOPS, 0.61 MiB/s [2024-12-09T14:07:50.047Z] 172.60 IOPS, 0.67 MiB/s 00:14:48.253 Latency(us) 00:14:48.253 [2024-12-09T14:07:50.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.253 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:48.253 xnvme_bdev : 5.16 179.83 0.70 0.00 0.00 353316.70 441.11 967916.31 00:14:48.253 [2024-12-09T14:07:50.047Z] =================================================================================================================== 00:14:48.253 [2024-12-09T14:07:50.047Z] Total : 179.83 0.70 0.00 0.00 353316.70 441.11 967916.31 00:14:49.197 00:14:49.197 real 0m25.654s 00:14:49.197 user 0m13.870s 00:14:49.197 sys 0m11.119s 00:14:49.197 14:07:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.197 14:07:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 ************************************ 00:14:49.197 END TEST xnvme_bdevperf 00:14:49.197 ************************************ 00:14:49.197 14:07:50 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:49.197 14:07:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:49.197 14:07:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.197 14:07:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 ************************************ 00:14:49.197 START TEST xnvme_fio_plugin 00:14:49.197 ************************************ 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:49.197 14:07:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:49.197 { 00:14:49.197 "subsystems": [ 00:14:49.197 { 00:14:49.197 "subsystem": "bdev", 00:14:49.197 "config": [ 00:14:49.197 { 00:14:49.197 "params": { 00:14:49.197 "io_mechanism": "io_uring_cmd", 00:14:49.197 "conserve_cpu": false, 00:14:49.197 "filename": "/dev/ng0n1", 00:14:49.197 "name": "xnvme_bdev" 00:14:49.197 }, 00:14:49.197 "method": "bdev_xnvme_create" 00:14:49.197 }, 00:14:49.197 { 00:14:49.197 "method": "bdev_wait_for_examine" 00:14:49.197 } 00:14:49.197 ] 00:14:49.197 } 00:14:49.197 ] 00:14:49.198 } 00:14:49.198 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:49.198 fio-3.35 00:14:49.198 Starting 1 thread 00:14:55.785 00:14:55.785 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71218: Mon Dec 9 14:07:56 2024 00:14:55.785 read: IOPS=36.7k, BW=143MiB/s (150MB/s)(717MiB/5002msec) 00:14:55.785 slat (usec): min=2, max=151, avg= 4.04, stdev= 2.35 00:14:55.785 clat (usec): min=393, max=6230, avg=1579.74, stdev=314.97 00:14:55.785 lat (usec): min=396, max=6233, avg=1583.79, stdev=315.44 00:14:55.785 clat percentiles (usec): 00:14:55.785 | 1.00th=[ 889], 5.00th=[ 1090], 10.00th=[ 1205], 20.00th=[ 1352], 00:14:55.785 | 30.00th=[ 1434], 40.00th=[ 1500], 50.00th=[ 1565], 60.00th=[ 1631], 00:14:55.785 | 70.00th=[ 1713], 80.00th=[ 1811], 90.00th=[ 1958], 95.00th=[ 2089], 00:14:55.785 | 99.00th=[ 2442], 99.50th=[ 2704], 99.90th=[ 3589], 99.95th=[ 3687], 00:14:55.785 | 99.99th=[ 4113] 00:14:55.785 bw ( KiB/s): min=138712, max=178176, per=100.00%, avg=147875.56, stdev=12183.61, samples=9 00:14:55.786 iops : min=34678, max=44544, avg=36968.89, stdev=3045.90, samples=9 00:14:55.786 lat (usec) : 500=0.01%, 750=0.05%, 1000=2.61% 00:14:55.786 lat (msec) : 2=89.61%, 4=7.70%, 10=0.02% 00:14:55.786 cpu : usr=36.69%, sys=61.89%, ctx=10, majf=0, minf=762 00:14:55.786 IO depths : 1=1.4%, 2=3.0%, 4=6.1%, 8=12.3%, 16=24.8%, 32=50.8%, >=64=1.7% 00:14:55.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.786 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:14:55.786 issued rwts: total=183634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:55.786 00:14:55.786 Run status group 0 (all jobs): 00:14:55.786 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=717MiB (752MB), run=5002-5002msec 00:14:55.786 ----------------------------------------------------- 00:14:55.786 Suppressions used: 00:14:55.786 count bytes template 00:14:55.786 1 11 /usr/src/fio/parse.c 00:14:55.786 1 8 libtcmalloc_minimal.so 00:14:55.786 1 904 libcrypto.so 00:14:55.786 ----------------------------------------------------- 00:14:55.786 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:55.786 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:56.046 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:56.046 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:56.046 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:56.046 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:56.046 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:56.046 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:56.046 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:56.046 14:07:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.046 { 00:14:56.046 "subsystems": [ 00:14:56.046 { 00:14:56.046 "subsystem": "bdev", 00:14:56.046 "config": [ 00:14:56.046 { 00:14:56.046 "params": { 00:14:56.046 "io_mechanism": "io_uring_cmd", 00:14:56.046 "conserve_cpu": false, 00:14:56.046 "filename": "/dev/ng0n1", 00:14:56.046 "name": "xnvme_bdev" 00:14:56.046 }, 00:14:56.046 "method": "bdev_xnvme_create" 00:14:56.046 }, 00:14:56.046 { 00:14:56.046 "method": "bdev_wait_for_examine" 00:14:56.046 } 00:14:56.046 ] 00:14:56.046 } 00:14:56.046 ] 00:14:56.046 } 00:14:56.046 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:56.046 fio-3.35 00:14:56.046 Starting 1 thread 00:15:02.627 00:15:02.627 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71309: Mon Dec 9 14:08:03 2024 00:15:02.627 write: IOPS=30.7k, BW=120MiB/s (126MB/s)(600MiB/5001msec); 0 zone resets 00:15:02.627 slat (nsec): min=2912, max=99480, avg=4121.78, stdev=2217.10 00:15:02.627 clat (usec): min=65, max=200303, avg=1924.40, stdev=5810.88 00:15:02.627 lat (usec): min=68, max=200318, avg=1928.52, stdev=5810.90 00:15:02.627 clat percentiles (usec): 00:15:02.627 | 1.00th=[ 668], 5.00th=[ 1123], 10.00th=[ 1254], 20.00th=[ 1369], 00:15:02.627 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1582], 60.00th=[ 1647], 00:15:02.627 | 70.00th=[ 1729], 80.00th=[ 1827], 90.00th=[ 1991], 95.00th=[ 2180], 00:15:02.627 | 99.00th=[ 6194], 99.50th=[ 10028], 99.90th=[108528], 99.95th=[141558], 00:15:02.627 | 99.99th=[198181] 00:15:02.627 bw ( KiB/s): min=69896, max=142744, per=99.43%, avg=122203.56, stdev=25910.56, samples=9 00:15:02.628 iops : min=17474, max=35686, avg=30550.89, stdev=6477.64, samples=9 00:15:02.628 lat (usec) : 100=0.01%, 250=0.04%, 500=0.30%, 750=1.11%, 1000=1.70% 00:15:02.628 lat (msec) : 2=87.17%, 4=8.11%, 10=1.06%, 20=0.25%, 50=0.04% 00:15:02.628 lat (msec) : 100=0.08%, 250=0.12% 00:15:02.628 cpu : usr=35.20%, sys=63.56%, ctx=9, majf=0, minf=763 00:15:02.628 IO depths : 1=1.3%, 2=2.7%, 4=5.5%, 8=11.3%, 16=23.8%, 32=53.3%, >=64=2.0% 00:15:02.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.628 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:02.628 issued rwts: total=0,153659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.628 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:02.628 00:15:02.628 Run status group 0 (all jobs): 00:15:02.628 WRITE: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=600MiB (629MB), run=5001-5001msec 00:15:02.889 ----------------------------------------------------- 00:15:02.889 Suppressions used: 00:15:02.889 count bytes template 00:15:02.889 1 11 /usr/src/fio/parse.c 00:15:02.889 1 8 libtcmalloc_minimal.so 00:15:02.889 1 904 libcrypto.so 00:15:02.889 ----------------------------------------------------- 00:15:02.889 00:15:02.889 ************************************ 00:15:02.889 END TEST xnvme_fio_plugin 00:15:02.889 ************************************ 00:15:02.889 00:15:02.889 real 0m13.745s 00:15:02.889 user 0m6.486s 00:15:02.889 sys 0m6.798s 00:15:02.889 14:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.889 14:08:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:02.889 14:08:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:02.889 14:08:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # 
method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:02.889 14:08:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:02.889 14:08:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:02.889 14:08:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:02.889 14:08:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.889 14:08:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:02.889 ************************************ 00:15:02.889 START TEST xnvme_rpc 00:15:02.889 ************************************ 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:02.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71394 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71394 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71394 ']' 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.889 14:08:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.889 [2024-12-09 14:08:04.618631] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
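This second xnvme_rpc pass repeats the earlier flow with CPU conservation enabled; the only difference on the wire is the -c flag at create time. A minimal sketch under the same assumptions as before (running spdk_tgt, stock scripts/rpc.py):

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# prints: true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev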
00:15:02.889 [2024-12-09 14:08:04.618783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71394 ] 00:15:03.151 [2024-12-09 14:08:04.783917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.151 [2024-12-09 14:08:04.903297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.093 xnvme_bdev 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71394 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71394 ']' 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71394 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71394 00:15:04.093 killing process with pid 71394 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71394' 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71394 00:15:04.093 14:08:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71394 00:15:06.007 ************************************ 00:15:06.007 END TEST xnvme_rpc 00:15:06.007 ************************************ 00:15:06.007 00:15:06.007 real 0m2.900s 00:15:06.007 user 0m2.918s 00:15:06.007 sys 0m0.461s 00:15:06.007 14:08:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.007 14:08:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.007 14:08:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:06.007 14:08:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:06.007 14:08:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.007 14:08:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.007 ************************************ 00:15:06.007 START TEST xnvme_bdevperf 00:15:06.007 ************************************ 00:15:06.007 14:08:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:06.007 14:08:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:06.007 14:08:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:06.007 14:08:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:06.007 14:08:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:06.007 14:08:07 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:06.007 14:08:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:06.007 14:08:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:06.007 { 00:15:06.007 "subsystems": [ 00:15:06.007 { 00:15:06.007 "subsystem": "bdev", 00:15:06.007 "config": [ 00:15:06.007 { 00:15:06.007 "params": { 00:15:06.007 "io_mechanism": "io_uring_cmd", 00:15:06.007 "conserve_cpu": true, 00:15:06.007 "filename": "/dev/ng0n1", 00:15:06.007 "name": "xnvme_bdev" 00:15:06.007 }, 00:15:06.007 "method": "bdev_xnvme_create" 00:15:06.007 }, 00:15:06.007 { 00:15:06.007 "method": "bdev_wait_for_examine" 00:15:06.007 } 00:15:06.007 ] 00:15:06.007 } 00:15:06.007 ] 00:15:06.007 } 00:15:06.007 [2024-12-09 14:08:07.570715] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:15:06.007 [2024-12-09 14:08:07.570844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71463 ] 00:15:06.007 [2024-12-09 14:08:07.733040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.267 [2024-12-09 14:08:07.852805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.528 Running I/O for 5 seconds... 00:15:08.413 37919.00 IOPS, 148.12 MiB/s [2024-12-09T14:08:11.151Z] 37454.50 IOPS, 146.31 MiB/s [2024-12-09T14:08:12.536Z] 37168.33 IOPS, 145.19 MiB/s [2024-12-09T14:08:13.479Z] 37013.75 IOPS, 144.58 MiB/s [2024-12-09T14:08:13.479Z] 36908.20 IOPS, 144.17 MiB/s 00:15:11.685 Latency(us) 00:15:11.685 [2024-12-09T14:08:13.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.685 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:11.685 xnvme_bdev : 5.01 36847.78 143.94 0.00 0.00 1730.97 749.88 8570.09 00:15:11.685 [2024-12-09T14:08:13.479Z] =================================================================================================================== 00:15:11.685 [2024-12-09T14:08:13.479Z] Total : 36847.78 143.94 0.00 0.00 1730.97 749.88 8570.09 00:15:12.257 14:08:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:12.257 14:08:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:12.257 14:08:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:12.257 14:08:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:12.257 14:08:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:12.257 { 00:15:12.257 "subsystems": [ 00:15:12.257 { 00:15:12.257 "subsystem": "bdev", 00:15:12.257 "config": [ 00:15:12.257 { 00:15:12.257 "params": { 00:15:12.257 "io_mechanism": "io_uring_cmd", 00:15:12.257 "conserve_cpu": true, 00:15:12.257 "filename": "/dev/ng0n1", 00:15:12.257 "name": "xnvme_bdev" 00:15:12.257 }, 00:15:12.257 "method": "bdev_xnvme_create" 00:15:12.257 }, 00:15:12.257 { 00:15:12.257 "method": "bdev_wait_for_examine" 00:15:12.257 } 00:15:12.257 ] 00:15:12.257 } 00:15:12.257 ] 00:15:12.257 } 00:15:12.257 [2024-12-09 14:08:14.021619] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
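The xnvme_fio_plugin passes traced before and after this bdevperf section run fio with SPDK's external spdk_bdev ioengine preloaded alongside ASan. A hedged reconstruction of the randread job, reusing a JSON config like the one sketched earlier (conserve_cpu toggled to match the pass) and assuming the /usr/src/fio and libasan paths seen on this host:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdevperf.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev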
00:15:12.257 [2024-12-09 14:08:14.021768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71538 ] 00:15:12.543 [2024-12-09 14:08:14.187820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.543 [2024-12-09 14:08:14.317669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.817 Running I/O for 5 seconds... 00:15:15.145 31661.00 IOPS, 123.68 MiB/s [2024-12-09T14:08:17.884Z] 27574.00 IOPS, 107.71 MiB/s [2024-12-09T14:08:18.829Z] 30172.67 IOPS, 117.86 MiB/s [2024-12-09T14:08:19.775Z] 32549.25 IOPS, 127.15 MiB/s [2024-12-09T14:08:19.775Z] 34214.80 IOPS, 133.65 MiB/s 00:15:17.981 Latency(us) 00:15:17.981 [2024-12-09T14:08:19.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.981 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:17.981 xnvme_bdev : 5.01 34182.75 133.53 0.00 0.00 1866.71 85.07 142767.66 00:15:17.981 [2024-12-09T14:08:19.775Z] =================================================================================================================== 00:15:17.981 [2024-12-09T14:08:19.775Z] Total : 34182.75 133.53 0.00 0.00 1866.71 85.07 142767.66 00:15:18.553 14:08:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:18.553 14:08:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:18.553 14:08:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:18.553 14:08:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:18.553 14:08:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:18.814 { 00:15:18.814 "subsystems": [ 00:15:18.814 { 00:15:18.814 "subsystem": "bdev", 00:15:18.814 "config": [ 00:15:18.814 { 00:15:18.814 "params": { 00:15:18.814 "io_mechanism": "io_uring_cmd", 00:15:18.814 "conserve_cpu": true, 00:15:18.814 "filename": "/dev/ng0n1", 00:15:18.814 "name": "xnvme_bdev" 00:15:18.814 }, 00:15:18.814 "method": "bdev_xnvme_create" 00:15:18.814 }, 00:15:18.814 { 00:15:18.814 "method": "bdev_wait_for_examine" 00:15:18.814 } 00:15:18.814 ] 00:15:18.814 } 00:15:18.814 ] 00:15:18.814 } 00:15:18.814 [2024-12-09 14:08:20.399586] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:15:18.814 [2024-12-09 14:08:20.399699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71612 ] 00:15:18.814 [2024-12-09 14:08:20.560164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.075 [2024-12-09 14:08:20.658241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.336 Running I/O for 5 seconds... 
00:15:21.225 84370.00 IOPS, 329.57 MiB/s [2024-12-09T14:08:23.965Z] 81309.00 IOPS, 317.61 MiB/s [2024-12-09T14:08:24.910Z] 75377.00 IOPS, 294.44 MiB/s [2024-12-09T14:08:25.927Z] 68871.00 IOPS, 269.03 MiB/s [2024-12-09T14:08:25.927Z] 63173.20 IOPS, 246.77 MiB/s 00:15:24.133 Latency(us) 00:15:24.133 [2024-12-09T14:08:25.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.133 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:24.133 xnvme_bdev : 5.00 63138.68 246.64 0.00 0.00 1009.37 91.37 18047.61 00:15:24.133 [2024-12-09T14:08:25.927Z] =================================================================================================================== 00:15:24.133 [2024-12-09T14:08:25.927Z] Total : 63138.68 246.64 0.00 0.00 1009.37 91.37 18047.61 00:15:25.076 14:08:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:25.076 14:08:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:25.076 14:08:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:25.076 14:08:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:25.076 14:08:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:25.076 { 00:15:25.076 "subsystems": [ 00:15:25.076 { 00:15:25.076 "subsystem": "bdev", 00:15:25.076 "config": [ 00:15:25.076 { 00:15:25.076 "params": { 00:15:25.076 "io_mechanism": "io_uring_cmd", 00:15:25.076 "conserve_cpu": true, 00:15:25.076 "filename": "/dev/ng0n1", 00:15:25.076 "name": "xnvme_bdev" 00:15:25.076 }, 00:15:25.076 "method": "bdev_xnvme_create" 00:15:25.076 }, 00:15:25.076 { 00:15:25.076 "method": "bdev_wait_for_examine" 00:15:25.076 } 00:15:25.076 ] 00:15:25.076 } 00:15:25.076 ] 00:15:25.076 } 00:15:25.076 [2024-12-09 14:08:26.708882] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:15:25.076 [2024-12-09 14:08:26.709108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71686 ] 00:15:25.338 [2024-12-09 14:08:26.869964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.338 [2024-12-09 14:08:26.963566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.598 Running I/O for 5 seconds... 
00:15:27.485 20783.00 IOPS, 81.18 MiB/s [2024-12-09T14:08:30.223Z] 20536.50 IOPS, 80.22 MiB/s [2024-12-09T14:08:31.614Z] 19439.00 IOPS, 75.93 MiB/s [2024-12-09T14:08:32.558Z] 18257.25 IOPS, 71.32 MiB/s [2024-12-09T14:08:32.558Z] 17756.00 IOPS, 69.36 MiB/s 00:15:30.764 Latency(us) 00:15:30.764 [2024-12-09T14:08:32.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.764 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:30.764 xnvme_bdev : 5.00 17744.81 69.32 0.00 0.00 3600.49 93.34 28432.54 00:15:30.764 [2024-12-09T14:08:32.558Z] =================================================================================================================== 00:15:30.764 [2024-12-09T14:08:32.558Z] Total : 17744.81 69.32 0.00 0.00 3600.49 93.34 28432.54 00:15:31.337 00:15:31.337 real 0m25.521s 00:15:31.337 user 0m17.116s 00:15:31.337 sys 0m5.985s 00:15:31.337 14:08:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.337 14:08:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:31.337 ************************************ 00:15:31.337 END TEST xnvme_bdevperf 00:15:31.337 ************************************ 00:15:31.337 14:08:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:31.337 14:08:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:31.337 14:08:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.337 14:08:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.337 ************************************ 00:15:31.337 START TEST xnvme_fio_plugin 00:15:31.337 ************************************ 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:31.337 14:08:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.337 { 00:15:31.337 "subsystems": [ 00:15:31.337 { 00:15:31.337 "subsystem": "bdev", 00:15:31.337 "config": [ 00:15:31.337 { 00:15:31.337 "params": { 00:15:31.337 "io_mechanism": "io_uring_cmd", 00:15:31.337 "conserve_cpu": true, 00:15:31.337 "filename": "/dev/ng0n1", 00:15:31.337 "name": "xnvme_bdev" 00:15:31.337 }, 00:15:31.337 "method": "bdev_xnvme_create" 00:15:31.337 }, 00:15:31.337 { 00:15:31.337 "method": "bdev_wait_for_examine" 00:15:31.337 } 00:15:31.337 ] 00:15:31.337 } 00:15:31.337 ] 00:15:31.337 } 00:15:31.598 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:31.598 fio-3.35 00:15:31.598 Starting 1 thread 00:15:38.238 00:15:38.238 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71799: Mon Dec 9 14:08:38 2024 00:15:38.238 read: IOPS=46.6k, BW=182MiB/s (191MB/s)(910MiB/5001msec) 00:15:38.238 slat (usec): min=2, max=117, avg= 3.44, stdev= 1.70 00:15:38.238 clat (usec): min=629, max=3891, avg=1239.28, stdev=247.56 00:15:38.238 lat (usec): min=632, max=3925, avg=1242.72, stdev=248.04 00:15:38.238 clat percentiles (usec): 00:15:38.238 | 1.00th=[ 816], 5.00th=[ 914], 10.00th=[ 979], 20.00th=[ 1057], 00:15:38.238 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1188], 60.00th=[ 1237], 00:15:38.238 | 70.00th=[ 1303], 80.00th=[ 1418], 90.00th=[ 1582], 95.00th=[ 1729], 00:15:38.238 | 99.00th=[ 2008], 99.50th=[ 2114], 99.90th=[ 2311], 99.95th=[ 2376], 00:15:38.238 | 99.99th=[ 3523] 00:15:38.238 bw ( KiB/s): min=164848, max=199680, per=100.00%, avg=190974.22, stdev=10425.82, samples=9 00:15:38.238 iops : min=41212, max=49920, avg=47743.56, stdev=2606.45, samples=9 00:15:38.238 lat (usec) : 750=0.22%, 1000=12.14% 00:15:38.238 lat (msec) : 2=86.64%, 4=1.01% 00:15:38.238 cpu : usr=67.02%, sys=30.56%, ctx=11, majf=0, minf=762 00:15:38.238 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:38.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.238 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:38.238 issued rwts: total=232894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.238 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:38.238 00:15:38.238 Run status group 0 (all jobs): 00:15:38.238 READ: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=910MiB (954MB), run=5001-5001msec 00:15:38.238 ----------------------------------------------------- 00:15:38.238 Suppressions used: 00:15:38.238 count bytes template 00:15:38.238 1 11 /usr/src/fio/parse.c 00:15:38.238 1 8 libtcmalloc_minimal.so 00:15:38.238 1 904 libcrypto.so 00:15:38.238 ----------------------------------------------------- 00:15:38.238 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:38.238 14:08:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:38.238 14:08:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:38.238 14:08:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:38.238 14:08:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:38.238 14:08:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:38.238 { 00:15:38.238 "subsystems": [ 00:15:38.238 { 00:15:38.238 "subsystem": "bdev", 00:15:38.238 "config": [ 00:15:38.238 { 00:15:38.238 "params": { 00:15:38.238 "io_mechanism": "io_uring_cmd", 00:15:38.238 "conserve_cpu": true, 00:15:38.238 "filename": "/dev/ng0n1", 00:15:38.238 "name": "xnvme_bdev" 00:15:38.238 }, 00:15:38.238 "method": "bdev_xnvme_create" 00:15:38.238 }, 00:15:38.238 { 00:15:38.238 "method": "bdev_wait_for_examine" 00:15:38.238 } 00:15:38.238 ] 00:15:38.238 } 00:15:38.238 ] 00:15:38.238 } 00:15:38.499 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:38.499 fio-3.35 00:15:38.499 Starting 1 thread 00:15:45.084 00:15:45.084 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71894: Mon Dec 9 14:08:45 2024 00:15:45.084 write: IOPS=44.2k, BW=173MiB/s (181MB/s)(864MiB/5001msec); 0 zone resets 00:15:45.084 slat (nsec): min=2902, max=76308, avg=3695.65, stdev=1756.41 00:15:45.084 clat (usec): min=664, max=3982, avg=1302.83, stdev=232.12 00:15:45.084 lat (usec): min=667, max=3985, avg=1306.52, stdev=232.55 00:15:45.084 clat percentiles (usec): 00:15:45.084 | 1.00th=[ 898], 5.00th=[ 988], 10.00th=[ 1045], 20.00th=[ 1106], 00:15:45.084 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1270], 60.00th=[ 1319], 00:15:45.084 | 70.00th=[ 1401], 80.00th=[ 1483], 90.00th=[ 1598], 95.00th=[ 1713], 00:15:45.084 | 99.00th=[ 1991], 99.50th=[ 2114], 99.90th=[ 2474], 99.95th=[ 2868], 00:15:45.084 | 99.99th=[ 3359] 00:15:45.084 bw ( KiB/s): min=168448, max=182712, per=99.91%, avg=176670.22, stdev=4540.68, samples=9 00:15:45.084 iops : min=42112, max=45678, avg=44167.56, stdev=1135.17, samples=9 00:15:45.084 lat (usec) : 750=0.02%, 1000=5.94% 00:15:45.084 lat (msec) : 2=93.13%, 4=0.92% 00:15:45.084 cpu : usr=68.04%, sys=29.18%, ctx=14, majf=0, minf=763 00:15:45.084 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:15:45.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.084 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:45.084 issued rwts: total=0,221082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.084 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:45.084 00:15:45.084 Run status group 0 (all jobs): 00:15:45.084 WRITE: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=864MiB (906MB), run=5001-5001msec 00:15:45.084 ----------------------------------------------------- 00:15:45.084 Suppressions used: 00:15:45.084 count bytes template 00:15:45.084 1 11 /usr/src/fio/parse.c 00:15:45.084 1 8 libtcmalloc_minimal.so 00:15:45.084 1 904 libcrypto.so 00:15:45.084 ----------------------------------------------------- 00:15:45.084 00:15:45.084 ************************************ 00:15:45.084 END TEST xnvme_fio_plugin 00:15:45.084 ************************************ 00:15:45.084 00:15:45.084 real 0m13.710s 00:15:45.084 user 0m9.506s 00:15:45.084 sys 0m3.611s 00:15:45.084 14:08:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.084 14:08:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:45.084 14:08:46 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71394 00:15:45.084 14:08:46 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71394 ']' 00:15:45.084 14:08:46 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71394 00:15:45.084 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71394) - No such process 00:15:45.084 14:08:46 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71394 is not found' 00:15:45.084 Process with pid 71394 is not found 00:15:45.084 ************************************ 00:15:45.084 END TEST nvme_xnvme 00:15:45.084 ************************************ 00:15:45.084 00:15:45.084 real 3m28.250s 00:15:45.084 user 1m52.623s 00:15:45.084 sys 1m19.942s 00:15:45.084 14:08:46 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.084 14:08:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.346 14:08:46 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:45.346 14:08:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:45.346 14:08:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.346 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:15:45.346 ************************************ 00:15:45.346 START TEST blockdev_xnvme 00:15:45.346 ************************************ 00:15:45.346 14:08:46 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:45.346 * Looking for test storage... 00:15:45.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:45.346 14:08:46 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:45.346 14:08:46 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:45.346 14:08:46 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:45.346 14:08:47 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.346 14:08:47 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:45.346 14:08:47 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.346 14:08:47 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:45.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.346 --rc genhtml_branch_coverage=1 00:15:45.346 --rc genhtml_function_coverage=1 00:15:45.346 --rc genhtml_legend=1 00:15:45.346 --rc geninfo_all_blocks=1 00:15:45.346 --rc geninfo_unexecuted_blocks=1 00:15:45.346 00:15:45.346 ' 00:15:45.346 14:08:47 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:45.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.346 --rc genhtml_branch_coverage=1 00:15:45.346 --rc genhtml_function_coverage=1 00:15:45.346 --rc genhtml_legend=1 00:15:45.346 --rc geninfo_all_blocks=1 00:15:45.346 --rc geninfo_unexecuted_blocks=1 00:15:45.346 00:15:45.346 ' 00:15:45.346 14:08:47 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:45.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.346 --rc genhtml_branch_coverage=1 00:15:45.346 --rc genhtml_function_coverage=1 00:15:45.346 --rc genhtml_legend=1 00:15:45.346 --rc geninfo_all_blocks=1 00:15:45.346 --rc geninfo_unexecuted_blocks=1 00:15:45.346 00:15:45.346 ' 00:15:45.346 14:08:47 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:45.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.346 --rc genhtml_branch_coverage=1 00:15:45.346 --rc genhtml_function_coverage=1 00:15:45.346 --rc genhtml_legend=1 00:15:45.346 --rc geninfo_all_blocks=1 00:15:45.346 --rc geninfo_unexecuted_blocks=1 00:15:45.346 00:15:45.346 ' 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:45.346 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:45.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72024 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72024 00:15:45.347 14:08:47 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72024 ']' 00:15:45.347 14:08:47 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.347 14:08:47 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.347 14:08:47 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.347 14:08:47 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.347 14:08:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.347 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:45.347 [2024-12-09 14:08:47.126430] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:15:45.347 [2024-12-09 14:08:47.126564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72024 ] 00:15:45.606 [2024-12-09 14:08:47.284660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.606 [2024-12-09 14:08:47.381962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.174 14:08:47 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.174 14:08:47 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:46.175 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:46.175 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:46.175 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:46.175 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:46.175 14:08:47 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:46.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:47.312 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:47.312 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:47.312 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:47.312 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:47.312 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.313 
14:08:48 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.313 14:08:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 14:08:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring -c' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:47.313 nvme0n1 00:15:47.313 nvme1n1 00:15:47.313 nvme2n1 00:15:47.313 nvme2n2 00:15:47.313 nvme2n3 00:15:47.313 nvme3n1 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.313 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.313 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:47.313 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.313 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.313 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.313 
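# Annotation: the six bdev_xnvme_create lines printed above are fed to
# rpc_cmd as one batch over the RPC pipe. A sketch of the same setup issued
# call by call against the spdk_tgt listening on /var/tmp/spdk.sock (the -c
# flag requests conserve_cpu, matching the batch above):
#
#   rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
#   $rpc bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
#   $rpc bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c
#   $rpc bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c
#   $rpc bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring -c
#   $rpc bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring -c
#   $rpc bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c
#   $rpc bdev_wait_for_examine
#   $rpc bdev_get_bdevs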
14:08:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.313 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:47.313 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.313 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:47.313 14:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.574 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:47.574 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:47.574 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2ec1585b-c227-4b94-ac1f-81efdb4554f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2ec1585b-c227-4b94-ac1f-81efdb4554f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "88f00c06-d62f-4988-937a-51464615074f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "88f00c06-d62f-4988-937a-51464615074f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9fb87ee3-1268-4e2e-940c-923a508949d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9fb87ee3-1268-4e2e-940c-923a508949d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2dd8b934-95f7-44b0-9df4-8684830ffb55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2dd8b934-95f7-44b0-9df4-8684830ffb55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "aab062d2-d407-4561-9250-62e4d6c94e85"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aab062d2-d407-4561-9250-62e4d6c94e85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8220e13c-9e91-4163-bef0-dfc87d78b10b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8220e13c-9e91-4163-bef0-dfc87d78b10b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:47.574 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:47.574 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:47.574 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:47.574 14:08:49 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72024 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72024 ']' 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72024 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 72024 00:15:47.574 killing process with pid 72024 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72024' 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72024 00:15:47.574 14:08:49 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72024 00:15:48.957 14:08:50 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:48.957 14:08:50 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:48.957 14:08:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:48.957 14:08:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.957 14:08:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.957 ************************************ 00:15:48.957 START TEST bdev_hello_world 00:15:48.957 ************************************ 00:15:48.957 14:08:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:48.957 [2024-12-09 14:08:50.747479] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:15:48.957 [2024-12-09 14:08:50.747609] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72308 ] 00:15:49.217 [2024-12-09 14:08:50.908419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.217 [2024-12-09 14:08:51.001946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.785 [2024-12-09 14:08:51.358434] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:49.785 [2024-12-09 14:08:51.358475] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:49.785 [2024-12-09 14:08:51.358491] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:49.785 [2024-12-09 14:08:51.360330] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:49.785 [2024-12-09 14:08:51.361112] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:49.785 [2024-12-09 14:08:51.361137] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:49.785 [2024-12-09 14:08:51.361533] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
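# Annotation: hello_bdev above exercises the minimal bdev consumer flow
# (open the bdev by name, open an I/O channel, write, then read back
# "Hello World!"). It can be rerun standalone with the same arguments this
# test used:
#
#   /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
#       --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1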
00:15:49.785 00:15:49.785 [2024-12-09 14:08:51.361589] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:50.355 00:15:50.355 ************************************ 00:15:50.355 END TEST bdev_hello_world 00:15:50.355 ************************************ 00:15:50.355 real 0m1.394s 00:15:50.355 user 0m1.075s 00:15:50.355 sys 0m0.178s 00:15:50.355 14:08:52 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.355 14:08:52 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:50.355 14:08:52 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:50.355 14:08:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.355 14:08:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.355 14:08:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.355 ************************************ 00:15:50.355 START TEST bdev_bounds 00:15:50.355 ************************************ 00:15:50.355 Process bdevio pid: 72340 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72340 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72340' 00:15:50.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72340 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72340 ']' 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.355 14:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:50.615 [2024-12-09 14:08:52.203094] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:15:50.615 [2024-12-09 14:08:52.203212] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72340 ] 00:15:50.615 [2024-12-09 14:08:52.357710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.874 [2024-12-09 14:08:52.456154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.874 [2024-12-09 14:08:52.456401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.874 [2024-12-09 14:08:52.456407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.460 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.460 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:51.460 14:08:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:51.460 I/O targets: 00:15:51.460 nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:51.460 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:51.460 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:51.460 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:51.460 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:51.460 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:51.460 00:15:51.460 00:15:51.460 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.460 http://cunit.sourceforge.net/ 00:15:51.460 00:15:51.460 00:15:51.460 Suite: bdevio tests on: nvme3n1 00:15:51.460 Test: blockdev write read block ...passed 00:15:51.460 Test: blockdev write zeroes read block ...passed 00:15:51.460 Test: blockdev write zeroes read no split ...passed 00:15:51.460 Test: blockdev write zeroes read split ...passed 00:15:51.460 Test: blockdev write zeroes read split partial ...passed 00:15:51.460 Test: blockdev reset ...passed 00:15:51.460 Test: blockdev write read 8 blocks ...passed 00:15:51.460 Test: blockdev write read size > 128k ...passed 00:15:51.460 Test: blockdev write read invalid size ...passed 00:15:51.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:51.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:51.460 Test: blockdev write read max offset ...passed 00:15:51.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:51.460 Test: blockdev writev readv 8 blocks ...passed 00:15:51.460 Test: blockdev writev readv 30 x 1block ...passed 00:15:51.460 Test: blockdev writev readv block ...passed 00:15:51.460 Test: blockdev writev readv size > 128k ...passed 00:15:51.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:51.460 Test: blockdev comparev and writev ...passed 00:15:51.460 Test: blockdev nvme passthru rw ...passed 00:15:51.460 Test: blockdev nvme passthru vendor specific ...passed 00:15:51.460 Test: blockdev nvme admin passthru ...passed 00:15:51.460 Test: blockdev copy ...passed 00:15:51.460 Suite: bdevio tests on: nvme2n3 00:15:51.460 Test: blockdev write read block ...passed 00:15:51.460 Test: blockdev write zeroes read block ...passed 00:15:51.460 Test: blockdev write zeroes read no split ...passed 00:15:51.460 Test: blockdev write zeroes read split ...passed 00:15:51.772 Test: blockdev write zeroes read split partial ...passed 00:15:51.772 Test: blockdev reset ...passed 
00:15:51.772 Test: blockdev write read 8 blocks ...passed 00:15:51.772 Test: blockdev write read size > 128k ...passed 00:15:51.772 Test: blockdev write read invalid size ...passed 00:15:51.772 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:51.772 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:51.772 Test: blockdev write read max offset ...passed 00:15:51.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:51.772 Test: blockdev writev readv 8 blocks ...passed 00:15:51.772 Test: blockdev writev readv 30 x 1block ...passed 00:15:51.772 Test: blockdev writev readv block ...passed 00:15:51.772 Test: blockdev writev readv size > 128k ...passed 00:15:51.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:51.772 Test: blockdev comparev and writev ...passed 00:15:51.772 Test: blockdev nvme passthru rw ...passed 00:15:51.772 Test: blockdev nvme passthru vendor specific ...passed 00:15:51.772 Test: blockdev nvme admin passthru ...passed 00:15:51.772 Test: blockdev copy ...passed 00:15:51.772 Suite: bdevio tests on: nvme2n2 00:15:51.772 Test: blockdev write read block ...passed 00:15:51.772 Test: blockdev write zeroes read block ...passed 00:15:51.772 Test: blockdev write zeroes read no split ...passed 00:15:51.772 Test: blockdev write zeroes read split ...passed 00:15:51.772 Test: blockdev write zeroes read split partial ...passed 00:15:51.772 Test: blockdev reset ...passed 00:15:51.772 Test: blockdev write read 8 blocks ...passed 00:15:51.772 Test: blockdev write read size > 128k ...passed 00:15:51.772 Test: blockdev write read invalid size ...passed 00:15:51.772 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:51.772 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:51.772 Test: blockdev write read max offset ...passed 00:15:51.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:51.772 Test: blockdev writev readv 8 blocks ...passed 00:15:51.772 Test: blockdev writev readv 30 x 1block ...passed 00:15:51.772 Test: blockdev writev readv block ...passed 00:15:51.772 Test: blockdev writev readv size > 128k ...passed 00:15:51.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:51.772 Test: blockdev comparev and writev ...passed 00:15:51.772 Test: blockdev nvme passthru rw ...passed 00:15:51.772 Test: blockdev nvme passthru vendor specific ...passed 00:15:51.772 Test: blockdev nvme admin passthru ...passed 00:15:51.772 Test: blockdev copy ...passed 00:15:51.772 Suite: bdevio tests on: nvme2n1 00:15:51.773 Test: blockdev write read block ...passed 00:15:51.773 Test: blockdev write zeroes read block ...passed 00:15:51.773 Test: blockdev write zeroes read no split ...passed 00:15:51.773 Test: blockdev write zeroes read split ...passed 00:15:51.773 Test: blockdev write zeroes read split partial ...passed 00:15:51.773 Test: blockdev reset ...passed 00:15:51.773 Test: blockdev write read 8 blocks ...passed 00:15:51.773 Test: blockdev write read size > 128k ...passed 00:15:51.773 Test: blockdev write read invalid size ...passed 00:15:51.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:51.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:51.773 Test: blockdev write read max offset ...passed 00:15:51.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:51.773 Test: blockdev writev readv 8 blocks 
...passed 00:15:51.773 Test: blockdev writev readv 30 x 1block ...passed 00:15:51.773 Test: blockdev writev readv block ...passed 00:15:51.773 Test: blockdev writev readv size > 128k ...passed 00:15:51.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:51.773 Test: blockdev comparev and writev ...passed 00:15:51.773 Test: blockdev nvme passthru rw ...passed 00:15:51.773 Test: blockdev nvme passthru vendor specific ...passed 00:15:51.773 Test: blockdev nvme admin passthru ...passed 00:15:51.773 Test: blockdev copy ...passed 00:15:51.773 Suite: bdevio tests on: nvme1n1 00:15:51.773 Test: blockdev write read block ...passed 00:15:51.773 Test: blockdev write zeroes read block ...passed 00:15:51.773 Test: blockdev write zeroes read no split ...passed 00:15:51.773 Test: blockdev write zeroes read split ...passed 00:15:51.773 Test: blockdev write zeroes read split partial ...passed 00:15:51.773 Test: blockdev reset ...passed 00:15:51.773 Test: blockdev write read 8 blocks ...passed 00:15:51.773 Test: blockdev write read size > 128k ...passed 00:15:51.773 Test: blockdev write read invalid size ...passed 00:15:51.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:51.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:51.773 Test: blockdev write read max offset ...passed 00:15:51.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:51.773 Test: blockdev writev readv 8 blocks ...passed 00:15:51.773 Test: blockdev writev readv 30 x 1block ...passed 00:15:51.773 Test: blockdev writev readv block ...passed 00:15:51.773 Test: blockdev writev readv size > 128k ...passed 00:15:51.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:51.773 Test: blockdev comparev and writev ...passed 00:15:51.773 Test: blockdev nvme passthru rw ...passed 00:15:51.773 Test: blockdev nvme passthru vendor specific ...passed 00:15:51.773 Test: blockdev nvme admin passthru ...passed 00:15:51.773 Test: blockdev copy ...passed 00:15:51.773 Suite: bdevio tests on: nvme0n1 00:15:51.773 Test: blockdev write read block ...passed 00:15:51.773 Test: blockdev write zeroes read block ...passed 00:15:51.773 Test: blockdev write zeroes read no split ...passed 00:15:51.773 Test: blockdev write zeroes read split ...passed 00:15:51.773 Test: blockdev write zeroes read split partial ...passed 00:15:51.773 Test: blockdev reset ...passed 00:15:51.773 Test: blockdev write read 8 blocks ...passed 00:15:51.773 Test: blockdev write read size > 128k ...passed 00:15:51.773 Test: blockdev write read invalid size ...passed 00:15:51.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:51.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:51.773 Test: blockdev write read max offset ...passed 00:15:51.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:51.773 Test: blockdev writev readv 8 blocks ...passed 00:15:51.773 Test: blockdev writev readv 30 x 1block ...passed 00:15:51.773 Test: blockdev writev readv block ...passed 00:15:51.773 Test: blockdev writev readv size > 128k ...passed 00:15:51.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:52.034 Test: blockdev comparev and writev ...passed 00:15:52.034 Test: blockdev nvme passthru rw ...passed 00:15:52.034 Test: blockdev nvme passthru vendor specific ...passed 00:15:52.034 Test: blockdev nvme admin passthru ...passed 00:15:52.034 Test: blockdev copy ...passed 
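# Annotation: each bdevio suite above runs the same 23 CUnit tests against
# one bdev, which is where the 138 total in the summary below comes from
# (6 suites x 23 tests). A sketch of a manual rerun, reusing the bdevio
# invocation from this test (bdevio -w waits for the perform_tests RPC,
# which tests.py then issues):
#
#   /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
#       --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
#   /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests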
00:15:52.034 00:15:52.034 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.034 suites 6 6 n/a 0 0 00:15:52.034 tests 138 138 138 0 0 00:15:52.034 asserts 780 780 780 0 n/a 00:15:52.034 00:15:52.034 Elapsed time = 1.117 seconds 00:15:52.034 0 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72340 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72340 ']' 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72340 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72340 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72340' 00:15:52.034 killing process with pid 72340 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72340 00:15:52.034 14:08:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72340 00:15:52.606 14:08:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:52.606 00:15:52.606 real 0m2.198s 00:15:52.606 user 0m5.466s 00:15:52.606 sys 0m0.272s 00:15:52.606 14:08:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.606 14:08:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:52.606 ************************************ 00:15:52.606 END TEST bdev_bounds 00:15:52.606 ************************************ 00:15:52.606 14:08:54 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:52.606 14:08:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:52.606 14:08:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.606 14:08:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.867 ************************************ 00:15:52.867 START TEST bdev_nbd 00:15:52.867 ************************************ 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
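# Annotation: the bdev_nbd test below exports each of the six xnvme bdevs as
# a kernel /dev/nbdN device and sanity-checks every one with a single direct
# 4 KiB read. The core of one iteration, as traced further down:
#
#   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
#   dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
#   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks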
00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72404 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:52.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72404 /var/tmp/spdk-nbd.sock 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72404 ']' 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:52.867 14:08:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:52.867 [2024-12-09 14:08:54.467177] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:15:52.867 [2024-12-09 14:08:54.467435] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.867 [2024-12-09 14:08:54.620621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.127 [2024-12-09 14:08:54.716652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:53.697 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:53.956 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:53.956 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:53.956 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.957 
1+0 records in 00:15:53.957 1+0 records out 00:15:53.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107654 s, 3.8 MB/s 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:53.957 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.217 1+0 records in 00:15:54.217 1+0 records out 00:15:54.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127162 s, 3.2 MB/s 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:54.217 14:08:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.217 14:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.217 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.217 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:54.217 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.217 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.217 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.217 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.217 1+0 records in 00:15:54.217 1+0 records out 00:15:54.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119757 s, 3.4 MB/s 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.478 1+0 records in 00:15:54.478 1+0 records out 00:15:54.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977304 s, 4.2 MB/s 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:54.478 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.739 1+0 records in 00:15:54.739 1+0 records out 00:15:54.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755124 s, 5.4 MB/s 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:54.739 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:55.000 14:08:56 
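
In this first attach round the RPC is given only a bdev name and SPDK picks the device node itself; the /dev/nbdX path is captured from the RPC's stdout, which is why the log shows nbd_device=/dev/nbd4 right after the nvme2n3 call. A hedged sketch of that pattern, using the script and socket paths from the log and the waitfornbd_sketch helper above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # With no device argument, nbd_start_disk prints the node it chose.
    nbd_device=$("$rpc" -s "$sock" nbd_start_disk nvme2n3)
    waitfornbd_sketch "$(basename "$nbd_device")"
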
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.000 1+0 records in 00:15:55.000 1+0 records out 00:15:55.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102758 s, 4.0 MB/s 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:55.000 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd0", 00:15:55.261 "bdev_name": "nvme0n1" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd1", 00:15:55.261 "bdev_name": "nvme1n1" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd2", 00:15:55.261 "bdev_name": "nvme2n1" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd3", 00:15:55.261 "bdev_name": "nvme2n2" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd4", 00:15:55.261 "bdev_name": "nvme2n3" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd5", 00:15:55.261 "bdev_name": "nvme3n1" 00:15:55.261 } 00:15:55.261 ]' 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd0", 00:15:55.261 "bdev_name": "nvme0n1" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd1", 00:15:55.261 "bdev_name": "nvme1n1" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd2", 00:15:55.261 "bdev_name": "nvme2n1" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd3", 00:15:55.261 "bdev_name": "nvme2n2" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": "/dev/nbd4", 00:15:55.261 "bdev_name": "nvme2n3" 00:15:55.261 }, 00:15:55.261 { 00:15:55.261 "nbd_device": 
"/dev/nbd5", 00:15:55.261 "bdev_name": "nvme3n1" 00:15:55.261 } 00:15:55.261 ]' 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.261 14:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.523 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.784 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.045 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.307 14:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.566 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:56.826 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:57.085 /dev/nbd0 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.085 1+0 records in 00:15:57.085 1+0 records out 00:15:57.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383897 s, 10.7 MB/s 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:57.085 /dev/nbd1 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.085 1+0 records in 00:15:57.085 1+0 records out 00:15:57.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529219 s, 7.7 MB/s 00:15:57.085 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.343 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.343 14:08:58 
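
The second attach pass (nbd_common.sh@9-15 above) works the other way around: it drives two parallel arrays and pins each bdev to a caller-chosen node, which is why nvme2n1 lands on /dev/nbd10 rather than on the next free device. A sketch of that lockstep loop with the same six entries as the log, reusing $rpc, $sock and waitfornbd_sketch from the sketches above:

    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # Passing the node explicitly pins the mapping instead of letting SPDK choose.
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd_sketch "$(basename "${nbd_list[i]}")"
    done
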
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.343 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.343 14:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.343 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.343 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:57.343 14:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:57.343 /dev/nbd10 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.343 1+0 records in 00:15:57.343 1+0 records out 00:15:57.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451879 s, 9.1 MB/s 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:57.343 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:57.602 /dev/nbd11 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.602 14:08:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.602 1+0 records in 00:15:57.602 1+0 records out 00:15:57.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526629 s, 7.8 MB/s 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:57.602 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:57.860 /dev/nbd12 00:15:57.860 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:57.860 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:57.860 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:57.860 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.860 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.860 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.861 1+0 records in 00:15:57.861 1+0 records out 00:15:57.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573107 s, 7.1 MB/s 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:57.861 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:58.119 /dev/nbd13 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.119 1+0 records in 00:15:58.119 1+0 records out 00:15:58.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381667 s, 10.7 MB/s 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.119 14:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd0", 00:15:58.401 "bdev_name": "nvme0n1" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd1", 00:15:58.401 "bdev_name": "nvme1n1" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd10", 00:15:58.401 "bdev_name": "nvme2n1" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd11", 00:15:58.401 "bdev_name": "nvme2n2" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd12", 00:15:58.401 "bdev_name": "nvme2n3" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd13", 00:15:58.401 "bdev_name": "nvme3n1" 00:15:58.401 } 00:15:58.401 ]' 00:15:58.401 14:09:00 
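
nbd_get_disks returns a JSON array pairing every active /dev/nbdX with its backing bdev; the harness flattens it with jq and sanity-checks the device count, with a "|| true" fallback (visible in the log) presumably because grep -c exits non-zero when nothing matches. A sketch of that query, assuming $rpc and $sock as in the earlier sketches and jq on the PATH as the log shows:

    json=$("$rpc" -s "$sock" nbd_get_disks)
    mapfile -t names < <(jq -r '.[] | .nbd_device' <<< "$json")
    count=$(printf '%s\n' "${names[@]}" | grep -c /dev/nbd || true)
    [ "$count" -eq 6 ] || echo "expected 6 nbd devices, found $count"
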
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd0", 00:15:58.401 "bdev_name": "nvme0n1" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd1", 00:15:58.401 "bdev_name": "nvme1n1" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd10", 00:15:58.401 "bdev_name": "nvme2n1" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd11", 00:15:58.401 "bdev_name": "nvme2n2" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd12", 00:15:58.401 "bdev_name": "nvme2n3" 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "nbd_device": "/dev/nbd13", 00:15:58.401 "bdev_name": "nvme3n1" 00:15:58.401 } 00:15:58.401 ]' 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:58.401 /dev/nbd1 00:15:58.401 /dev/nbd10 00:15:58.401 /dev/nbd11 00:15:58.401 /dev/nbd12 00:15:58.401 /dev/nbd13' 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:58.401 /dev/nbd1 00:15:58.401 /dev/nbd10 00:15:58.401 /dev/nbd11 00:15:58.401 /dev/nbd12 00:15:58.401 /dev/nbd13' 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:58.401 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:58.402 256+0 records in 00:15:58.402 256+0 records out 00:15:58.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111513 s, 94.0 MB/s 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:58.402 256+0 records in 00:15:58.402 256+0 records out 00:15:58.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644432 s, 16.3 MB/s 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:58.402 256+0 records in 00:15:58.402 256+0 records out 00:15:58.402 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0579682 s, 18.1 MB/s 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:58.402 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:58.660 256+0 records in 00:15:58.660 256+0 records out 00:15:58.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0566238 s, 18.5 MB/s 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:58.660 256+0 records in 00:15:58.660 256+0 records out 00:15:58.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0601748 s, 17.4 MB/s 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:58.660 256+0 records in 00:15:58.660 256+0 records out 00:15:58.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0555363 s, 18.9 MB/s 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:58.660 256+0 records in 00:15:58.660 256+0 records out 00:15:58.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0590074 s, 17.8 MB/s 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:58.660 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.919 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.178 14:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.436 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.694 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.951 14:09:01 
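
The dd/cmp pass just before this teardown is a plain data round-trip: 1 MiB of random bytes is written through every nbd device with direct I/O, then the first 1 MiB of each device is compared byte-for-byte against the source file. Condensed into a sketch, with the temp path and device list taken from the log:

    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$pattern" bs=4096 count=256             # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct  # write pass
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$pattern" "$nbd"                             # read-back compare
    done
    rm "$pattern"
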
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.951 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:00.209 14:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:00.467 malloc_lvol_verify 00:16:00.467 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:00.726 77fe43b6-c5ea-4a88-9680-faf442b4865b 00:16:00.726 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:00.984 9b9ee2f2-aa75-48f2-a772-a3fbc3e2f507 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:00.984 /dev/nbd0 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
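
The lvol leg above condenses to five RPC/shell steps: create a 16 MiB malloc bdev with 512-byte blocks, turn it into an lvstore, carve out a 4 MiB lvol, export lvs/lvol as /dev/nbd0, and only format once /sys/block/nbd0/size reports a non-zero capacity (the log sees 8192 sectors). A sketch under those assumptions, with $rpc and $sock as in the earlier sketches; the polling loop is an assumption, since the log shows only a single capacity check. The mke2fs output below is the result of the final step.

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # Wait until the kernel reports a non-zero capacity before formatting.
    until [ -e /sys/block/nbd0/size ] && [ "$(cat /sys/block/nbd0/size)" -gt 0 ]; do
        sleep 0.1
    done
    mkfs.ext4 /dev/nbd0
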
00:16:00.984 mke2fs 1.47.0 (5-Feb-2023) 00:16:00.984 Discarding device blocks: 0/4096 done 00:16:00.984 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:00.984 00:16:00.984 Allocating group tables: 0/1 done 00:16:00.984 Writing inode tables: 0/1 done 00:16:00.984 Creating journal (1024 blocks): done 00:16:00.984 Writing superblocks and filesystem accounting information: 0/1 done 00:16:00.984 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.984 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72404 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72404 ']' 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72404 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.242 14:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72404 00:16:01.242 killing process with pid 72404 00:16:01.242 14:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.242 14:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.242 14:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72404' 00:16:01.242 14:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72404 00:16:01.242 14:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72404 00:16:02.181 ************************************ 00:16:02.181 END TEST bdev_nbd 00:16:02.181 ************************************ 00:16:02.181 14:09:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:02.181 00:16:02.181 real 0m9.205s 00:16:02.181 user 0m13.170s 00:16:02.181 sys 0m3.092s 00:16:02.181 14:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.181 
14:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:02.181 14:09:03 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:02.181 14:09:03 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:02.181 14:09:03 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:02.181 14:09:03 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:02.181 14:09:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:02.181 14:09:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.181 14:09:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:02.181 ************************************ 00:16:02.181 START TEST bdev_fio 00:16:02.181 ************************************ 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:02.181 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:02.181 ************************************ 00:16:02.181 START TEST bdev_fio_rw_verify 00:16:02.181 ************************************ 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:02.181 14:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:02.181 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:02.181 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:02.181 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:02.181 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:02.181 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:02.181 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:02.181 fio-3.35 00:16:02.181 Starting 6 threads 00:16:14.439 00:16:14.439 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72789: Mon Dec 9 14:09:14 2024 00:16:14.439 read: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(830MiB/10001msec) 00:16:14.439 slat (usec): min=2, max=2849, avg= 5.55, stdev=16.89 00:16:14.439 clat (usec): min=55, max=6956, avg=841.81, 
stdev=657.52 00:16:14.439 lat (usec): min=68, max=6960, avg=847.37, stdev=658.25 00:16:14.439 clat percentiles (usec): 00:16:14.439 | 50.000th=[ 635], 99.000th=[ 2999], 99.900th=[ 4228], 99.990th=[ 6194], 00:16:14.439 | 99.999th=[ 6980] 00:16:14.439 write: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(841MiB/10001msec); 0 zone resets 00:16:14.439 slat (usec): min=13, max=4281, avg=36.08, stdev=119.39 00:16:14.439 clat (usec): min=67, max=6851, avg=1112.01, stdev=740.53 00:16:14.439 lat (usec): min=80, max=6874, avg=1148.09, stdev=754.87 00:16:14.439 clat percentiles (usec): 00:16:14.439 | 50.000th=[ 938], 99.000th=[ 3425], 99.900th=[ 4817], 99.990th=[ 6325], 00:16:14.439 | 99.999th=[ 6783] 00:16:14.439 bw ( KiB/s): min=53137, max=149608, per=100.00%, avg=87787.32, stdev=4265.96, samples=114 00:16:14.439 iops : min=13283, max=37402, avg=21946.11, stdev=1066.47, samples=114 00:16:14.439 lat (usec) : 100=0.07%, 250=9.12%, 500=21.70%, 750=17.40%, 1000=12.09% 00:16:14.439 lat (msec) : 2=30.66%, 4=8.69%, 10=0.27% 00:16:14.439 cpu : usr=40.58%, sys=33.73%, ctx=7092, majf=0, minf=19345 00:16:14.439 IO depths : 1=11.3%, 2=23.6%, 4=51.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.439 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.439 issued rwts: total=212448,215281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.439 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:14.439 00:16:14.439 Run status group 0 (all jobs): 00:16:14.439 READ: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=830MiB (870MB), run=10001-10001msec 00:16:14.439 WRITE: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=841MiB (882MB), run=10001-10001msec 00:16:14.439 ----------------------------------------------------- 00:16:14.439 Suppressions used: 00:16:14.439 count bytes template 00:16:14.439 6 48 /usr/src/fio/parse.c 00:16:14.439 2674 256704 /usr/src/fio/iolog.c 00:16:14.439 1 8 libtcmalloc_minimal.so 00:16:14.439 1 904 libcrypto.so 00:16:14.439 ----------------------------------------------------- 00:16:14.439 00:16:14.439 00:16:14.439 real 0m11.799s 00:16:14.439 user 0m25.750s 00:16:14.439 sys 0m20.527s 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.439 ************************************ 00:16:14.439 END TEST bdev_fio_rw_verify 00:16:14.439 ************************************ 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2ec1585b-c227-4b94-ac1f-81efdb4554f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2ec1585b-c227-4b94-ac1f-81efdb4554f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "88f00c06-d62f-4988-937a-51464615074f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "88f00c06-d62f-4988-937a-51464615074f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9fb87ee3-1268-4e2e-940c-923a508949d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9fb87ee3-1268-4e2e-940c-923a508949d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2dd8b934-95f7-44b0-9df4-8684830ffb55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2dd8b934-95f7-44b0-9df4-8684830ffb55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "aab062d2-d407-4561-9250-62e4d6c94e85"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aab062d2-d407-4561-9250-62e4d6c94e85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8220e13c-9e91-4163-bef0-dfc87d78b10b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8220e13c-9e91-4163-bef0-dfc87d78b10b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:14.439 /home/vagrant/spdk_repo/spdk 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
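An aside on the rw_verify run above: because the fio plugin is built with ASan, the harness resolves the sanitizer runtime from ldd output and preloads it ahead of the spdk_bdev ioengine (ASan has to come first in the preload list for its interceptors to initialize, which is why the trace greps for libasan at all). A minimal sketch of that pattern, with paths taken from this run:

    # Locate the ASan runtime the fio plugin links against (paths from this run).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the sanitizer first, then the plugin, before handing fio the job file.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio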
00:16:14.439 00:16:14.439 real 0m11.965s 00:16:14.439 user 0m25.821s 00:16:14.439 sys 0m20.601s 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.439 ************************************ 00:16:14.439 END TEST bdev_fio 00:16:14.439 ************************************ 00:16:14.439 14:09:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:14.439 14:09:15 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:14.440 14:09:15 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:14.440 14:09:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:14.440 14:09:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.440 14:09:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:14.440 ************************************ 00:16:14.440 START TEST bdev_verify 00:16:14.440 ************************************ 00:16:14.440 14:09:15 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:14.440 [2024-12-09 14:09:15.744340] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:16:14.440 [2024-12-09 14:09:15.744450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72960 ] 00:16:14.440 [2024-12-09 14:09:15.899929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:14.440 [2024-12-09 14:09:15.995916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.440 [2024-12-09 14:09:15.996007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.700 Running I/O for 5 seconds... 
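For reference while the run completes, this is the bdevperf command just traced, with its flags annotated (glosses follow bdevperf's usage text; -C is reproduced as captured):

    # Verify pass as launched above; paths as in this run.
    #   -q 128     queue depth per target
    #   -o 4096    IO size in bytes
    #   -w verify  data is written, read back, and checked
    #   -t 5       run time in seconds
    #   -m 0x3     core mask, matching the two reactors started on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3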
00:16:17.027 20848.00 IOPS, 81.44 MiB/s [2024-12-09T14:09:19.764Z] 22152.00 IOPS, 86.53 MiB/s [2024-12-09T14:09:20.708Z] 22117.33 IOPS, 86.40 MiB/s [2024-12-09T14:09:21.659Z] 22153.75 IOPS, 86.54 MiB/s [2024-12-09T14:09:21.659Z] 22355.20 IOPS, 87.33 MiB/s 00:16:19.865 Latency(us) 00:16:19.865 [2024-12-09T14:09:21.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.865 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x0 length 0xbd0bd 00:16:19.865 nvme0n1 : 5.04 2267.20 8.86 0.00 0.00 56150.51 5419.32 70577.23 00:16:19.865 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:19.865 nvme0n1 : 5.05 2180.60 8.52 0.00 0.00 58542.54 4965.61 81062.99 00:16:19.865 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x0 length 0x20000 00:16:19.865 nvme1n1 : 5.03 1805.72 7.05 0.00 0.00 70456.32 6125.10 70173.93 00:16:19.865 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x20000 length 0x20000 00:16:19.865 nvme1n1 : 5.04 1776.60 6.94 0.00 0.00 71691.79 4789.17 74610.22 00:16:19.865 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x0 length 0x80000 00:16:19.865 nvme2n1 : 5.06 1796.11 7.02 0.00 0.00 70641.05 6856.07 65737.65 00:16:19.865 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x80000 length 0x80000 00:16:19.865 nvme2n1 : 5.04 1725.30 6.74 0.00 0.00 73685.99 7108.14 75820.11 00:16:19.865 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x0 length 0x80000 00:16:19.865 nvme2n2 : 5.07 1792.99 7.00 0.00 0.00 70590.56 7158.55 64931.05 00:16:19.865 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x80000 length 0x80000 00:16:19.865 nvme2n2 : 5.05 1724.73 6.74 0.00 0.00 73541.59 9981.64 77030.01 00:16:19.865 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x0 length 0x80000 00:16:19.865 nvme2n3 : 5.06 1794.53 7.01 0.00 0.00 70358.78 7864.32 70980.53 00:16:19.865 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x80000 length 0x80000 00:16:19.865 nvme2n3 : 5.05 1723.69 6.73 0.00 0.00 73406.41 6301.54 83079.48 00:16:19.865 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0x0 length 0xa0000 00:16:19.865 nvme3n1 : 5.08 1839.29 7.18 0.00 0.00 68543.27 1140.58 77433.30 00:16:19.865 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:19.865 Verification LBA range: start 0xa0000 length 0xa0000 00:16:19.865 nvme3n1 : 5.06 1744.68 6.82 0.00 0.00 72367.51 2848.30 80659.69 00:16:19.865 [2024-12-09T14:09:21.659Z] =================================================================================================================== 00:16:19.865 [2024-12-09T14:09:21.659Z] Total : 22171.43 86.61 0.00 0.00 68654.63 1140.58 83079.48 00:16:20.808 00:16:20.808 real 0m6.547s 00:16:20.808 user 0m11.025s 00:16:20.808 sys 0m1.145s 00:16:20.808 ************************************ 00:16:20.808 END TEST 
bdev_verify 00:16:20.808 ************************************ 00:16:20.808 14:09:22 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.808 14:09:22 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:20.808 14:09:22 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:20.808 14:09:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:20.808 14:09:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.808 14:09:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.808 ************************************ 00:16:20.808 START TEST bdev_verify_big_io 00:16:20.808 ************************************ 00:16:20.808 14:09:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:20.808 [2024-12-09 14:09:22.359437] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:16:20.808 [2024-12-09 14:09:22.359573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73060 ] 00:16:20.808 [2024-12-09 14:09:22.520745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:21.069 [2024-12-09 14:09:22.619502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.069 [2024-12-09 14:09:22.619555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.330 Running I/O for 5 seconds... 
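A quick cross-check of the verify summary above: 22,355.20 IOPS at 4 KiB per IO is exactly the 87.33 MiB/s reported.

    # IOPS times IO size, converted to MiB/s (figures from the verify totals above):
    awk 'BEGIN { printf "%.2f MiB/s\n", 22355.20 * 4096 / 1048576 }'
    # -> 87.33 MiB/s

The same relation frames the big-IO pass now running: at -o 65536 each IO carries 64 KiB, so expect IOPS roughly an order of magnitude lower at comparable bandwidth.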
00:16:26.530 847.00 IOPS, 52.94 MiB/s [2024-12-09T14:09:29.259Z] 2032.00 IOPS, 127.00 MiB/s [2024-12-09T14:09:29.259Z] 2560.33 IOPS, 160.02 MiB/s 00:16:27.465 Latency(us) 00:16:27.465 [2024-12-09T14:09:29.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.465 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x0 length 0xbd0b 00:16:27.465 nvme0n1 : 5.83 120.77 7.55 0.00 0.00 1027466.96 12855.14 1884210.41 00:16:27.465 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:27.465 nvme0n1 : 5.94 166.93 10.43 0.00 0.00 727646.89 6276.33 1045349.61 00:16:27.465 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x0 length 0x2000 00:16:27.465 nvme1n1 : 5.75 108.59 6.79 0.00 0.00 1082501.22 98404.82 1690627.15 00:16:27.465 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x2000 length 0x2000 00:16:27.465 nvme1n1 : 5.75 100.22 6.26 0.00 0.00 1202509.28 76626.71 2013265.92 00:16:27.465 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x0 length 0x8000 00:16:27.465 nvme2n1 : 5.75 133.56 8.35 0.00 0.00 875021.13 107277.39 816276.09 00:16:27.465 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x8000 length 0x8000 00:16:27.465 nvme2n1 : 5.94 113.15 7.07 0.00 0.00 1020770.61 67754.14 2413337.99 00:16:27.465 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x0 length 0x8000 00:16:27.465 nvme2n2 : 5.91 116.40 7.28 0.00 0.00 960565.78 64124.46 2335904.69 00:16:27.465 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x8000 length 0x8000 00:16:27.465 nvme2n2 : 5.94 118.45 7.40 0.00 0.00 945548.82 11191.53 1664816.05 00:16:27.465 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x0 length 0x8000 00:16:27.465 nvme2n3 : 5.94 123.81 7.74 0.00 0.00 885200.94 30449.03 1716438.25 00:16:27.465 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x8000 length 0x8000 00:16:27.465 nvme2n3 : 5.95 129.13 8.07 0.00 0.00 851331.54 4637.93 1451874.46 00:16:27.465 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0x0 length 0xa000 00:16:27.465 nvme3n1 : 5.95 169.41 10.59 0.00 0.00 627209.41 545.08 1703532.70 00:16:27.465 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:27.465 Verification LBA range: start 0xa000 length 0xa000 00:16:27.465 nvme3n1 : 5.95 126.30 7.89 0.00 0.00 840430.54 3780.92 2181038.08 00:16:27.465 [2024-12-09T14:09:29.259Z] =================================================================================================================== 00:16:27.465 [2024-12-09T14:09:29.259Z] Total : 1526.72 95.42 0.00 0.00 897189.63 545.08 2413337.99 00:16:28.031 00:16:28.031 real 0m7.450s 00:16:28.031 user 0m13.821s 00:16:28.031 sys 0m0.352s 00:16:28.031 14:09:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.031 
************************************ 00:16:28.031 END TEST bdev_verify_big_io ************************************ 00:16:28.031 14:09:29 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:28.031 14:09:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:28.031 14:09:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.031 14:09:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:28.031 ************************************ 00:16:28.031 START TEST bdev_write_zeroes 00:16:28.031 ************************************ 00:16:28.031 14:09:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:28.289 [2024-12-09 14:09:29.863742] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:16:28.289 [2024-12-09 14:09:29.863831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73170 ] 00:16:28.289 [2024-12-09 14:09:30.009699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.547 [2024-12-09 14:09:30.087628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.805 Running I/O for 1 second... 
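A note on why write_zeroes can target all six devices: each xNVMe bdev in the JSON dumped earlier advertises "write_zeroes": true even though unmap, flush, and reset are false; the trim stage above filtered on the same structure with .unmap. Against a live target the check looks like this (a sketch; assumes the repo's rpc.py and a running app):

    # List bdevs advertising the write_zeroes IO type.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'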
00:16:29.745 74528.00 IOPS, 291.12 MiB/s 00:16:29.745 Latency(us) 00:16:29.745 [2024-12-09T14:09:31.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.745 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.745 nvme0n1 : 1.02 19776.87 77.25 0.00 0.00 6462.54 2306.36 17946.78 00:16:29.745 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.745 nvme1n1 : 1.03 10788.52 42.14 0.00 0.00 11802.87 3302.01 22080.59 00:16:29.745 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.745 nvme2n1 : 1.03 10776.14 42.09 0.00 0.00 11810.76 3503.66 20870.70 00:16:29.745 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.745 nvme2n2 : 1.03 10764.34 42.05 0.00 0.00 11816.26 3654.89 19761.62 00:16:29.745 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.745 nvme2n3 : 1.04 10752.52 42.00 0.00 0.00 11822.66 3856.54 19660.80 00:16:29.745 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.745 nvme3n1 : 1.03 10822.25 42.27 0.00 0.00 11734.96 4058.19 25811.10 00:16:29.745 [2024-12-09T14:09:31.539Z] =================================================================================================================== 00:16:29.745 [2024-12-09T14:09:31.539Z] Total : 73680.64 287.82 0.00 0.00 10374.40 2306.36 25811.10 00:16:30.687 00:16:30.687 real 0m2.349s 00:16:30.687 user 0m1.658s 00:16:30.687 sys 0m0.525s 00:16:30.687 14:09:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.687 14:09:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:30.687 ************************************ 00:16:30.687 END TEST bdev_write_zeroes 00:16:30.687 ************************************ 00:16:30.687 14:09:32 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:30.687 14:09:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:30.687 14:09:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.687 14:09:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.687 ************************************ 00:16:30.687 START TEST bdev_json_nonenclosed 00:16:30.687 ************************************ 00:16:30.687 14:09:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:30.687 [2024-12-09 14:09:32.287592] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:16:30.687 [2024-12-09 14:09:32.287706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73215 ] 00:16:30.687 [2024-12-09 14:09:32.447813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.949 [2024-12-09 14:09:32.543933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.949 [2024-12-09 14:09:32.544006] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:30.949 [2024-12-09 14:09:32.544022] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:30.949 [2024-12-09 14:09:32.544031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:30.949 00:16:30.949 real 0m0.500s 00:16:30.949 user 0m0.304s 00:16:30.949 sys 0m0.091s 00:16:30.949 14:09:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.949 ************************************ 00:16:30.949 END TEST bdev_json_nonenclosed 00:16:30.949 ************************************ 00:16:30.949 14:09:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:31.209 14:09:32 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:31.209 14:09:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:31.209 14:09:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.209 14:09:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.209 ************************************ 00:16:31.209 START TEST bdev_json_nonarray 00:16:31.209 ************************************ 00:16:31.209 14:09:32 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:31.209 [2024-12-09 14:09:32.837739] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:16:31.209 [2024-12-09 14:09:32.837851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73245 ] 00:16:31.209 [2024-12-09 14:09:32.999091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.470 [2024-12-09 14:09:33.092255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.470 [2024-12-09 14:09:33.092333] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
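These two negative tests pin down the config contract: the file handed to --json must be a single JSON object, and its "subsystems" key must be an array; nonenclosed.json and nonarray.json each violate one of those rules. A minimal well-formed skeleton, matching the shape save_config emits later in this log (contents illustrative):

    # Smallest config shape the JSON loader accepts.
    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF

The ublk test at the end of this log exercises the same shape end to end: save_config is captured from one spdk_tgt and echoed back into a fresh one.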
00:16:31.470 [2024-12-09 14:09:33.092350] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:31.470 [2024-12-09 14:09:33.092358] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:31.731 00:16:31.731 real 0m0.492s 00:16:31.731 user 0m0.295s 00:16:31.731 sys 0m0.092s 00:16:31.731 ************************************ 00:16:31.731 END TEST bdev_json_nonarray 00:16:31.731 ************************************ 00:16:31.731 14:09:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.731 14:09:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:31.731 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:31.731 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:31.731 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:31.731 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:31.731 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:31.732 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:31.732 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:31.732 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:31.732 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:31.732 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:31.732 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:31.732 14:09:33 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:31.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:40.156 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:40.156 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:45.430 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:45.430 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:45.430 00:16:45.430 real 0m59.636s 00:16:45.430 user 1m17.583s 00:16:45.430 sys 0m59.473s 00:16:45.430 14:09:46 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.430 ************************************ 00:16:45.430 END TEST blockdev_xnvme 00:16:45.430 14:09:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.430 ************************************ 00:16:45.430 14:09:46 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:45.430 14:09:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:45.430 14:09:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.431 14:09:46 -- common/autotest_common.sh@10 -- # set +x 00:16:45.431 ************************************ 00:16:45.431 START TEST ublk 00:16:45.431 ************************************ 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:45.431 * Looking for test storage... 
00:16:45.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:45.431 14:09:46 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.431 14:09:46 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.431 14:09:46 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.431 14:09:46 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.431 14:09:46 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.431 14:09:46 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.431 14:09:46 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.431 14:09:46 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.431 14:09:46 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.431 14:09:46 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.431 14:09:46 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.431 14:09:46 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:45.431 14:09:46 ublk -- scripts/common.sh@345 -- # : 1 00:16:45.431 14:09:46 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.431 14:09:46 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:45.431 14:09:46 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:45.431 14:09:46 ublk -- scripts/common.sh@353 -- # local d=1 00:16:45.431 14:09:46 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.431 14:09:46 ublk -- scripts/common.sh@355 -- # echo 1 00:16:45.431 14:09:46 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.431 14:09:46 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:45.431 14:09:46 ublk -- scripts/common.sh@353 -- # local d=2 00:16:45.431 14:09:46 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.431 14:09:46 ublk -- scripts/common.sh@355 -- # echo 2 00:16:45.431 14:09:46 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.431 14:09:46 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.431 14:09:46 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.431 14:09:46 ublk -- scripts/common.sh@368 -- # return 0 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.431 --rc genhtml_branch_coverage=1 00:16:45.431 --rc genhtml_function_coverage=1 00:16:45.431 --rc genhtml_legend=1 00:16:45.431 --rc geninfo_all_blocks=1 00:16:45.431 --rc geninfo_unexecuted_blocks=1 00:16:45.431 00:16:45.431 ' 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.431 --rc genhtml_branch_coverage=1 00:16:45.431 --rc genhtml_function_coverage=1 00:16:45.431 --rc genhtml_legend=1 00:16:45.431 --rc geninfo_all_blocks=1 00:16:45.431 --rc geninfo_unexecuted_blocks=1 00:16:45.431 00:16:45.431 ' 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.431 --rc genhtml_branch_coverage=1 00:16:45.431 --rc 
genhtml_function_coverage=1 00:16:45.431 --rc genhtml_legend=1 00:16:45.431 --rc geninfo_all_blocks=1 00:16:45.431 --rc geninfo_unexecuted_blocks=1 00:16:45.431 00:16:45.431 ' 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:45.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.431 --rc genhtml_branch_coverage=1 00:16:45.431 --rc genhtml_function_coverage=1 00:16:45.431 --rc genhtml_legend=1 00:16:45.431 --rc geninfo_all_blocks=1 00:16:45.431 --rc geninfo_unexecuted_blocks=1 00:16:45.431 00:16:45.431 ' 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:45.431 14:09:46 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:45.431 14:09:46 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:45.431 14:09:46 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:45.431 14:09:46 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:45.431 14:09:46 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:45.431 14:09:46 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:45.431 14:09:46 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:45.431 14:09:46 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:45.431 14:09:46 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.431 14:09:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.431 ************************************ 00:16:45.431 START TEST test_save_ublk_config 00:16:45.431 ************************************ 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73546 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73546 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73546 ']' 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:45.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.431 14:09:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.431 [2024-12-09 14:09:46.845969] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:16:45.431 [2024-12-09 14:09:46.846088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73546 ] 00:16:45.431 [2024-12-09 14:09:47.004682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.431 [2024-12-09 14:09:47.099676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.000 14:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.000 14:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:46.000 14:09:47 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:46.000 14:09:47 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:46.000 14:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.000 14:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:46.000 [2024-12-09 14:09:47.737563] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:46.000 [2024-12-09 14:09:47.738467] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:46.261 malloc0 00:16:46.261 [2024-12-09 14:09:47.809702] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:46.261 [2024-12-09 14:09:47.809795] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:46.261 [2024-12-09 14:09:47.809806] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:46.261 [2024-12-09 14:09:47.809814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:46.261 [2024-12-09 14:09:47.818667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:46.261 [2024-12-09 14:09:47.818695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:46.261 [2024-12-09 14:09:47.825577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:46.261 [2024-12-09 14:09:47.825698] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:46.261 [2024-12-09 14:09:47.842570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:46.261 0 00:16:46.261 14:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.261 14:09:47 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:46.261 14:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.261 14:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:46.522 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.522 14:09:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:46.522 "subsystems": [ 00:16:46.522 { 00:16:46.522 "subsystem": "fsdev", 00:16:46.522 
"config": [ 00:16:46.522 { 00:16:46.522 "method": "fsdev_set_opts", 00:16:46.522 "params": { 00:16:46.522 "fsdev_io_pool_size": 65535, 00:16:46.522 "fsdev_io_cache_size": 256 00:16:46.522 } 00:16:46.522 } 00:16:46.522 ] 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "subsystem": "keyring", 00:16:46.522 "config": [] 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "subsystem": "iobuf", 00:16:46.522 "config": [ 00:16:46.522 { 00:16:46.522 "method": "iobuf_set_options", 00:16:46.522 "params": { 00:16:46.522 "small_pool_count": 8192, 00:16:46.522 "large_pool_count": 1024, 00:16:46.522 "small_bufsize": 8192, 00:16:46.522 "large_bufsize": 135168, 00:16:46.522 "enable_numa": false 00:16:46.522 } 00:16:46.522 } 00:16:46.522 ] 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "subsystem": "sock", 00:16:46.522 "config": [ 00:16:46.522 { 00:16:46.522 "method": "sock_set_default_impl", 00:16:46.522 "params": { 00:16:46.522 "impl_name": "posix" 00:16:46.522 } 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "method": "sock_impl_set_options", 00:16:46.522 "params": { 00:16:46.522 "impl_name": "ssl", 00:16:46.522 "recv_buf_size": 4096, 00:16:46.522 "send_buf_size": 4096, 00:16:46.522 "enable_recv_pipe": true, 00:16:46.522 "enable_quickack": false, 00:16:46.522 "enable_placement_id": 0, 00:16:46.522 "enable_zerocopy_send_server": true, 00:16:46.522 "enable_zerocopy_send_client": false, 00:16:46.522 "zerocopy_threshold": 0, 00:16:46.522 "tls_version": 0, 00:16:46.522 "enable_ktls": false 00:16:46.522 } 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "method": "sock_impl_set_options", 00:16:46.522 "params": { 00:16:46.522 "impl_name": "posix", 00:16:46.522 "recv_buf_size": 2097152, 00:16:46.522 "send_buf_size": 2097152, 00:16:46.522 "enable_recv_pipe": true, 00:16:46.522 "enable_quickack": false, 00:16:46.522 "enable_placement_id": 0, 00:16:46.522 "enable_zerocopy_send_server": true, 00:16:46.522 "enable_zerocopy_send_client": false, 00:16:46.522 "zerocopy_threshold": 0, 00:16:46.522 "tls_version": 0, 00:16:46.522 "enable_ktls": false 00:16:46.522 } 00:16:46.522 } 00:16:46.522 ] 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "subsystem": "vmd", 00:16:46.522 "config": [] 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "subsystem": "accel", 00:16:46.522 "config": [ 00:16:46.522 { 00:16:46.522 "method": "accel_set_options", 00:16:46.522 "params": { 00:16:46.522 "small_cache_size": 128, 00:16:46.522 "large_cache_size": 16, 00:16:46.522 "task_count": 2048, 00:16:46.522 "sequence_count": 2048, 00:16:46.522 "buf_count": 2048 00:16:46.522 } 00:16:46.522 } 00:16:46.522 ] 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "subsystem": "bdev", 00:16:46.522 "config": [ 00:16:46.522 { 00:16:46.522 "method": "bdev_set_options", 00:16:46.522 "params": { 00:16:46.522 "bdev_io_pool_size": 65535, 00:16:46.522 "bdev_io_cache_size": 256, 00:16:46.522 "bdev_auto_examine": true, 00:16:46.522 "iobuf_small_cache_size": 128, 00:16:46.522 "iobuf_large_cache_size": 16 00:16:46.522 } 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "method": "bdev_raid_set_options", 00:16:46.522 "params": { 00:16:46.522 "process_window_size_kb": 1024, 00:16:46.522 "process_max_bandwidth_mb_sec": 0 00:16:46.522 } 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "method": "bdev_iscsi_set_options", 00:16:46.522 "params": { 00:16:46.522 "timeout_sec": 30 00:16:46.522 } 00:16:46.522 }, 00:16:46.522 { 00:16:46.522 "method": "bdev_nvme_set_options", 00:16:46.522 "params": { 00:16:46.522 "action_on_timeout": "none", 00:16:46.522 "timeout_us": 0, 00:16:46.522 "timeout_admin_us": 0, 00:16:46.522 
"keep_alive_timeout_ms": 10000, 00:16:46.522 "arbitration_burst": 0, 00:16:46.522 "low_priority_weight": 0, 00:16:46.522 "medium_priority_weight": 0, 00:16:46.522 "high_priority_weight": 0, 00:16:46.522 "nvme_adminq_poll_period_us": 10000, 00:16:46.522 "nvme_ioq_poll_period_us": 0, 00:16:46.522 "io_queue_requests": 0, 00:16:46.522 "delay_cmd_submit": true, 00:16:46.522 "transport_retry_count": 4, 00:16:46.523 "bdev_retry_count": 3, 00:16:46.523 "transport_ack_timeout": 0, 00:16:46.523 "ctrlr_loss_timeout_sec": 0, 00:16:46.523 "reconnect_delay_sec": 0, 00:16:46.523 "fast_io_fail_timeout_sec": 0, 00:16:46.523 "disable_auto_failback": false, 00:16:46.523 "generate_uuids": false, 00:16:46.523 "transport_tos": 0, 00:16:46.523 "nvme_error_stat": false, 00:16:46.523 "rdma_srq_size": 0, 00:16:46.523 "io_path_stat": false, 00:16:46.523 "allow_accel_sequence": false, 00:16:46.523 "rdma_max_cq_size": 0, 00:16:46.523 "rdma_cm_event_timeout_ms": 0, 00:16:46.523 "dhchap_digests": [ 00:16:46.523 "sha256", 00:16:46.523 "sha384", 00:16:46.523 "sha512" 00:16:46.523 ], 00:16:46.523 "dhchap_dhgroups": [ 00:16:46.523 "null", 00:16:46.523 "ffdhe2048", 00:16:46.523 "ffdhe3072", 00:16:46.523 "ffdhe4096", 00:16:46.523 "ffdhe6144", 00:16:46.523 "ffdhe8192" 00:16:46.523 ] 00:16:46.523 } 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "method": "bdev_nvme_set_hotplug", 00:16:46.523 "params": { 00:16:46.523 "period_us": 100000, 00:16:46.523 "enable": false 00:16:46.523 } 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "method": "bdev_malloc_create", 00:16:46.523 "params": { 00:16:46.523 "name": "malloc0", 00:16:46.523 "num_blocks": 8192, 00:16:46.523 "block_size": 4096, 00:16:46.523 "physical_block_size": 4096, 00:16:46.523 "uuid": "be6d0fec-41fa-481e-a1ef-b6f6fa704c82", 00:16:46.523 "optimal_io_boundary": 0, 00:16:46.523 "md_size": 0, 00:16:46.523 "dif_type": 0, 00:16:46.523 "dif_is_head_of_md": false, 00:16:46.523 "dif_pi_format": 0 00:16:46.523 } 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "method": "bdev_wait_for_examine" 00:16:46.523 } 00:16:46.523 ] 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "subsystem": "scsi", 00:16:46.523 "config": null 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "subsystem": "scheduler", 00:16:46.523 "config": [ 00:16:46.523 { 00:16:46.523 "method": "framework_set_scheduler", 00:16:46.523 "params": { 00:16:46.523 "name": "static" 00:16:46.523 } 00:16:46.523 } 00:16:46.523 ] 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "subsystem": "vhost_scsi", 00:16:46.523 "config": [] 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "subsystem": "vhost_blk", 00:16:46.523 "config": [] 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "subsystem": "ublk", 00:16:46.523 "config": [ 00:16:46.523 { 00:16:46.523 "method": "ublk_create_target", 00:16:46.523 "params": { 00:16:46.523 "cpumask": "1" 00:16:46.523 } 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "method": "ublk_start_disk", 00:16:46.523 "params": { 00:16:46.523 "bdev_name": "malloc0", 00:16:46.523 "ublk_id": 0, 00:16:46.523 "num_queues": 1, 00:16:46.523 "queue_depth": 128 00:16:46.523 } 00:16:46.523 } 00:16:46.523 ] 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "subsystem": "nbd", 00:16:46.523 "config": [] 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "subsystem": "nvmf", 00:16:46.523 "config": [ 00:16:46.523 { 00:16:46.523 "method": "nvmf_set_config", 00:16:46.523 "params": { 00:16:46.523 "discovery_filter": "match_any", 00:16:46.523 "admin_cmd_passthru": { 00:16:46.523 "identify_ctrlr": false 00:16:46.523 }, 00:16:46.523 "dhchap_digests": [ 00:16:46.523 "sha256", 00:16:46.523 
"sha384", 00:16:46.523 "sha512" 00:16:46.523 ], 00:16:46.523 "dhchap_dhgroups": [ 00:16:46.523 "null", 00:16:46.523 "ffdhe2048", 00:16:46.523 "ffdhe3072", 00:16:46.523 "ffdhe4096", 00:16:46.523 "ffdhe6144", 00:16:46.523 "ffdhe8192" 00:16:46.523 ] 00:16:46.523 } 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "method": "nvmf_set_max_subsystems", 00:16:46.523 "params": { 00:16:46.523 "max_subsystems": 1024 00:16:46.523 } 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "method": "nvmf_set_crdt", 00:16:46.523 "params": { 00:16:46.523 "crdt1": 0, 00:16:46.523 "crdt2": 0, 00:16:46.523 "crdt3": 0 00:16:46.523 } 00:16:46.523 } 00:16:46.523 ] 00:16:46.523 }, 00:16:46.523 { 00:16:46.523 "subsystem": "iscsi", 00:16:46.523 "config": [ 00:16:46.523 { 00:16:46.523 "method": "iscsi_set_options", 00:16:46.523 "params": { 00:16:46.523 "node_base": "iqn.2016-06.io.spdk", 00:16:46.523 "max_sessions": 128, 00:16:46.523 "max_connections_per_session": 2, 00:16:46.523 "max_queue_depth": 64, 00:16:46.523 "default_time2wait": 2, 00:16:46.523 "default_time2retain": 20, 00:16:46.523 "first_burst_length": 8192, 00:16:46.523 "immediate_data": true, 00:16:46.523 "allow_duplicated_isid": false, 00:16:46.523 "error_recovery_level": 0, 00:16:46.523 "nop_timeout": 60, 00:16:46.523 "nop_in_interval": 30, 00:16:46.523 "disable_chap": false, 00:16:46.523 "require_chap": false, 00:16:46.523 "mutual_chap": false, 00:16:46.523 "chap_group": 0, 00:16:46.523 "max_large_datain_per_connection": 64, 00:16:46.523 "max_r2t_per_connection": 4, 00:16:46.523 "pdu_pool_size": 36864, 00:16:46.523 "immediate_data_pool_size": 16384, 00:16:46.523 "data_out_pool_size": 2048 00:16:46.523 } 00:16:46.523 } 00:16:46.523 ] 00:16:46.523 } 00:16:46.523 ] 00:16:46.523 }' 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73546 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73546 ']' 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73546 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73546 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.523 killing process with pid 73546 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73546' 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73546 00:16:46.523 14:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73546 00:16:47.486 [2024-12-09 14:09:49.264576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:47.748 [2024-12-09 14:09:49.298655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:47.748 [2024-12-09 14:09:49.298823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:47.748 [2024-12-09 14:09:49.307593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:47.748 [2024-12-09 14:09:49.307656] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:16:47.748 [2024-12-09 14:09:49.307670] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:47.748 [2024-12-09 14:09:49.307701] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:47.748 [2024-12-09 14:09:49.307857] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73603 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73603 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73603 ']' 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:49.127 14:09:50 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:49.127 "subsystems": [ 00:16:49.127 { 00:16:49.127 "subsystem": "fsdev", 00:16:49.127 "config": [ 00:16:49.127 { 00:16:49.127 "method": "fsdev_set_opts", 00:16:49.127 "params": { 00:16:49.127 "fsdev_io_pool_size": 65535, 00:16:49.127 "fsdev_io_cache_size": 256 00:16:49.127 } 00:16:49.127 } 00:16:49.127 ] 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "subsystem": "keyring", 00:16:49.127 "config": [] 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "subsystem": "iobuf", 00:16:49.127 "config": [ 00:16:49.127 { 00:16:49.127 "method": "iobuf_set_options", 00:16:49.127 "params": { 00:16:49.127 "small_pool_count": 8192, 00:16:49.127 "large_pool_count": 1024, 00:16:49.127 "small_bufsize": 8192, 00:16:49.127 "large_bufsize": 135168, 00:16:49.127 "enable_numa": false 00:16:49.127 } 00:16:49.127 } 00:16:49.127 ] 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "subsystem": "sock", 00:16:49.127 "config": [ 00:16:49.127 { 00:16:49.127 "method": "sock_set_default_impl", 00:16:49.127 "params": { 00:16:49.127 "impl_name": "posix" 00:16:49.127 } 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "method": "sock_impl_set_options", 00:16:49.127 "params": { 00:16:49.127 "impl_name": "ssl", 00:16:49.127 "recv_buf_size": 4096, 00:16:49.127 "send_buf_size": 4096, 00:16:49.127 "enable_recv_pipe": true, 00:16:49.127 "enable_quickack": false, 00:16:49.127 "enable_placement_id": 0, 00:16:49.127 "enable_zerocopy_send_server": true, 00:16:49.127 "enable_zerocopy_send_client": false, 00:16:49.127 "zerocopy_threshold": 0, 00:16:49.127 "tls_version": 0, 00:16:49.127 "enable_ktls": false 00:16:49.127 } 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "method": "sock_impl_set_options", 00:16:49.127 "params": { 00:16:49.127 "impl_name": "posix", 00:16:49.127 "recv_buf_size": 2097152, 00:16:49.127 "send_buf_size": 2097152, 00:16:49.127 "enable_recv_pipe": true, 00:16:49.127 "enable_quickack": false, 00:16:49.127 "enable_placement_id": 0, 00:16:49.127 "enable_zerocopy_send_server": true, 00:16:49.127 "enable_zerocopy_send_client": false, 00:16:49.127 "zerocopy_threshold": 0, 00:16:49.127 "tls_version": 0, 00:16:49.127 "enable_ktls": false 00:16:49.127 } 00:16:49.127 } 00:16:49.127 ] 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 
"subsystem": "vmd", 00:16:49.127 "config": [] 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "subsystem": "accel", 00:16:49.127 "config": [ 00:16:49.127 { 00:16:49.127 "method": "accel_set_options", 00:16:49.127 "params": { 00:16:49.127 "small_cache_size": 128, 00:16:49.127 "large_cache_size": 16, 00:16:49.127 "task_count": 2048, 00:16:49.127 "sequence_count": 2048, 00:16:49.127 "buf_count": 2048 00:16:49.127 } 00:16:49.127 } 00:16:49.127 ] 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "subsystem": "bdev", 00:16:49.127 "config": [ 00:16:49.127 { 00:16:49.127 "method": "bdev_set_options", 00:16:49.127 "params": { 00:16:49.127 "bdev_io_pool_size": 65535, 00:16:49.127 "bdev_io_cache_size": 256, 00:16:49.127 "bdev_auto_examine": true, 00:16:49.127 "iobuf_small_cache_size": 128, 00:16:49.127 "iobuf_large_cache_size": 16 00:16:49.127 } 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "method": "bdev_raid_set_options", 00:16:49.127 "params": { 00:16:49.127 "process_window_size_kb": 1024, 00:16:49.127 "process_max_bandwidth_mb_sec": 0 00:16:49.127 } 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "method": "bdev_iscsi_set_options", 00:16:49.127 "params": { 00:16:49.127 "timeout_sec": 30 00:16:49.127 } 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "method": "bdev_nvme_set_options", 00:16:49.127 "params": { 00:16:49.127 "action_on_timeout": "none", 00:16:49.127 "timeout_us": 0, 00:16:49.127 "timeout_admin_us": 0, 00:16:49.127 "keep_alive_timeout_ms": 10000, 00:16:49.127 "arbitration_burst": 0, 00:16:49.127 "low_priority_weight": 0, 00:16:49.127 "medium_priority_weight": 0, 00:16:49.127 "high_priority_weight": 0, 00:16:49.127 "nvme_adminq_poll_period_us": 10000, 00:16:49.127 "nvme_ioq_poll_period_us": 0, 00:16:49.127 "io_queue_requests": 0, 00:16:49.127 "delay_cmd_submit": true, 00:16:49.127 "transport_retry_count": 4, 00:16:49.127 "bdev_retry_count": 3, 00:16:49.127 "transport_ack_timeout": 0, 00:16:49.127 "ctrlr_loss_timeout_sec": 0, 00:16:49.127 "reconnect_delay_sec": 0, 00:16:49.127 "fast_io_fail_timeout_sec": 0, 00:16:49.127 "disable_auto_failback": false, 00:16:49.127 "generate_uuids": false, 00:16:49.127 "transport_tos": 0, 00:16:49.127 "nvme_error_stat": false, 00:16:49.127 "rdma_srq_size": 0, 00:16:49.127 "io_path_stat": false, 00:16:49.127 "allow_accel_sequence": false, 00:16:49.127 "rdma_max_cq_size": 0, 00:16:49.127 "rdma_cm_event_timeout_ms": 0, 00:16:49.127 "dhchap_digests": [ 00:16:49.127 "sha256", 00:16:49.127 "sha384", 00:16:49.127 "sha512" 00:16:49.127 ], 00:16:49.127 "dhchap_dhgroups": [ 00:16:49.127 "null", 00:16:49.127 "ffdhe2048", 00:16:49.127 "ffdhe3072", 00:16:49.127 "ffdhe4096", 00:16:49.127 "ffdhe6144", 00:16:49.127 "ffdhe8192" 00:16:49.127 ] 00:16:49.127 } 00:16:49.127 }, 00:16:49.127 { 00:16:49.127 "method": "bdev_nvme_set_hotplug", 00:16:49.127 "params": { 00:16:49.127 "period_us": 100000, 00:16:49.128 "enable": false 00:16:49.128 } 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "method": "bdev_malloc_create", 00:16:49.128 "params": { 00:16:49.128 "name": "malloc0", 00:16:49.128 "num_blocks": 8192, 00:16:49.128 "block_size": 4096, 00:16:49.128 "physical_block_size": 4096, 00:16:49.128 "uuid": "be6d0fec-41fa-481e-a1ef-b6f6fa704c82", 00:16:49.128 "optimal_io_boundary": 0, 00:16:49.128 "md_size": 0, 00:16:49.128 "dif_type": 0, 00:16:49.128 "dif_is_head_of_md": false, 00:16:49.128 "dif_pi_format": 0 00:16:49.128 } 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "method": "bdev_wait_for_examine" 00:16:49.128 } 00:16:49.128 ] 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "subsystem": "scsi", 
00:16:49.128 "config": null 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "subsystem": "scheduler", 00:16:49.128 "config": [ 00:16:49.128 { 00:16:49.128 "method": "framework_set_scheduler", 00:16:49.128 "params": { 00:16:49.128 "name": "static" 00:16:49.128 } 00:16:49.128 } 00:16:49.128 ] 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "subsystem": "vhost_scsi", 00:16:49.128 "config": [] 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "subsystem": "vhost_blk", 00:16:49.128 "config": [] 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "subsystem": "ublk", 00:16:49.128 "config": [ 00:16:49.128 { 00:16:49.128 "method": "ublk_create_target", 00:16:49.128 "params": { 00:16:49.128 "cpumask": "1" 00:16:49.128 } 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "method": "ublk_start_disk", 00:16:49.128 "params": { 00:16:49.128 "bdev_name": "malloc0", 00:16:49.128 "ublk_id": 0, 00:16:49.128 "num_queues": 1, 00:16:49.128 "queue_depth": 128 00:16:49.128 } 00:16:49.128 } 00:16:49.128 ] 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "subsystem": "nbd", 00:16:49.128 "config": [] 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "subsystem": "nvmf", 00:16:49.128 "config": [ 00:16:49.128 { 00:16:49.128 "method": "nvmf_set_config", 00:16:49.128 "params": { 00:16:49.128 "discovery_filter": "match_any", 00:16:49.128 "admin_cmd_passthru": { 00:16:49.128 "identify_ctrlr": false 00:16:49.128 }, 00:16:49.128 "dhchap_digests": [ 00:16:49.128 "sha256", 00:16:49.128 "sha384", 00:16:49.128 "sha512" 00:16:49.128 ], 00:16:49.128 "dhchap_dhgroups": [ 00:16:49.128 "null", 00:16:49.128 "ffdhe2048", 00:16:49.128 "ffdhe3072", 00:16:49.128 "ffdhe4096", 00:16:49.128 "ffdhe6144", 00:16:49.128 "ffdhe8192" 00:16:49.128 ] 00:16:49.128 } 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "method": "nvmf_set_max_subsystems", 00:16:49.128 "params": { 00:16:49.128 "max_subsystems": 1024 00:16:49.128 } 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "method": "nvmf_set_crdt", 00:16:49.128 "params": { 00:16:49.128 "crdt1": 0, 00:16:49.128 "crdt2": 0, 00:16:49.128 "crdt3": 0 00:16:49.128 } 00:16:49.128 } 00:16:49.128 ] 00:16:49.128 }, 00:16:49.128 { 00:16:49.128 "subsystem": "iscsi", 00:16:49.128 "config": [ 00:16:49.128 { 00:16:49.128 "method": "iscsi_set_options", 00:16:49.128 "params": { 00:16:49.128 "node_base": "iqn.2016-06.io.spdk", 00:16:49.128 "max_sessions": 128, 00:16:49.128 "max_connections_per_session": 2, 00:16:49.128 "max_queue_depth": 64, 00:16:49.128 "default_time2wait": 2, 00:16:49.128 "default_time2retain": 20, 00:16:49.128 "first_burst_length": 8192, 00:16:49.128 "immediate_data": true, 00:16:49.128 "allow_duplicated_isid": false, 00:16:49.128 "error_recovery_level": 0, 00:16:49.128 "nop_timeout": 60, 00:16:49.128 "nop_in_interval": 30, 00:16:49.128 "disable_chap": false, 00:16:49.128 "require_chap": false, 00:16:49.128 "mutual_chap": false, 00:16:49.128 "chap_group": 0, 00:16:49.128 "max_large_datain_per_connection": 64, 00:16:49.128 "max_r2t_per_connection": 4, 00:16:49.128 "pdu_pool_size": 36864, 00:16:49.128 "immediate_data_pool_size": 16384, 00:16:49.128 "data_out_pool_size": 2048 00:16:49.128 } 00:16:49.128 } 00:16:49.128 ] 00:16:49.128 } 00:16:49.128 ] 00:16:49.128 }' 00:16:49.128 14:09:50 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:49.128 [2024-12-09 14:09:50.654763] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:16:49.128 [2024-12-09 14:09:50.654887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73603 ] 00:16:49.128 [2024-12-09 14:09:50.809920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.128 [2024-12-09 14:09:50.895243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.063 [2024-12-09 14:09:51.533552] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:50.063 [2024-12-09 14:09:51.534183] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:50.063 [2024-12-09 14:09:51.541634] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:50.063 [2024-12-09 14:09:51.541691] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:50.063 [2024-12-09 14:09:51.541699] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:50.063 [2024-12-09 14:09:51.541704] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:50.063 [2024-12-09 14:09:51.550602] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:50.063 [2024-12-09 14:09:51.550620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:50.063 [2024-12-09 14:09:51.557558] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:50.063 [2024-12-09 14:09:51.557627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:50.063 [2024-12-09 14:09:51.574555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73603 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73603 ']' 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73603 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73603 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.063 killing process with pid 73603 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73603' 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73603 00:16:50.063 14:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73603 00:16:50.998 [2024-12-09 14:09:52.663147] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:50.998 [2024-12-09 14:09:52.701616] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:50.998 [2024-12-09 14:09:52.701707] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:50.998 [2024-12-09 14:09:52.709555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:50.998 [2024-12-09 14:09:52.709593] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:50.998 [2024-12-09 14:09:52.709599] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:50.998 [2024-12-09 14:09:52.709619] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:50.998 [2024-12-09 14:09:52.709726] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:52.374 14:09:53 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:52.374 00:16:52.374 real 0m7.107s 00:16:52.374 user 0m4.947s 00:16:52.374 sys 0m2.792s 00:16:52.374 14:09:53 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.374 14:09:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:52.374 ************************************ 00:16:52.374 END TEST test_save_ublk_config 00:16:52.374 ************************************ 00:16:52.374 14:09:53 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73670 00:16:52.374 14:09:53 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:52.374 14:09:53 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73670 00:16:52.374 14:09:53 ublk -- common/autotest_common.sh@835 -- # '[' -z 73670 ']' 00:16:52.374 14:09:53 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.374 14:09:53 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.374 14:09:53 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.374 14:09:53 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.374 14:09:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.374 14:09:53 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:52.374 [2024-12-09 14:09:53.998080] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
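Unlike the single-core targets used for test_save_ublk_config (-c 0x1 in their DPDK EAL parameters), the target for the remaining ublk tests is launched with -m 0x3; the EAL and reactor lines that follow confirm two cores. The mask arithmetic, for reference:

  # -m takes a hexadecimal CPU bitmask: 0x3 == 0b11 -> reactors on cores 0 and 1.
  # 0x1 would pin a single reactor to core 0, as in the earlier runs.
  build/bin/spdk_tgt -m 0x3 -L ublk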
00:16:52.374 [2024-12-09 14:09:53.998209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73670 ] 00:16:52.374 [2024-12-09 14:09:54.154703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:52.632 [2024-12-09 14:09:54.241391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.632 [2024-12-09 14:09:54.241494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.199 14:09:54 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.199 14:09:54 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:53.199 14:09:54 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:53.199 14:09:54 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:53.199 14:09:54 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.199 14:09:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.199 ************************************ 00:16:53.199 START TEST test_create_ublk 00:16:53.199 ************************************ 00:16:53.199 14:09:54 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:53.199 14:09:54 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:53.199 14:09:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.199 14:09:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.199 [2024-12-09 14:09:54.851552] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:53.199 [2024-12-09 14:09:54.853063] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:53.199 14:09:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.199 14:09:54 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:53.199 14:09:54 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:53.199 14:09:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.199 14:09:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.459 14:09:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:53.459 14:09:55 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.459 14:09:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.459 [2024-12-09 14:09:55.014659] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:53.459 [2024-12-09 14:09:55.014948] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:53.459 [2024-12-09 14:09:55.014961] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:53.459 [2024-12-09 14:09:55.014967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:53.459 [2024-12-09 14:09:55.022567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:53.459 [2024-12-09 14:09:55.022585] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:53.459 
[2024-12-09 14:09:55.030561] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:53.459 [2024-12-09 14:09:55.031043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:53.459 [2024-12-09 14:09:55.054560] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:53.459 14:09:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:53.459 14:09:55 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.459 14:09:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.459 14:09:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:53.459 { 00:16:53.459 "ublk_device": "/dev/ublkb0", 00:16:53.459 "id": 0, 00:16:53.459 "queue_depth": 512, 00:16:53.459 "num_queues": 4, 00:16:53.459 "bdev_name": "Malloc0" 00:16:53.459 } 00:16:53.459 ]' 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:53.459 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:53.460 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:53.460 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:53.460 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:53.460 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:53.460 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:53.460 14:09:55 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
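The fio_template assembled above resolves to a plain fio invocation: write a 0xcc pattern across the whole 128 MiB ublk device for 10 seconds with pattern verification enabled. Stripped of the lvol/common.sh wrapper, the same command is:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
  # fio notes below that the separate verification read phase never runs,
  # since the time-based write phase consumes the entire runtime.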
00:16:53.460 14:09:55 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:53.717 fio: verification read phase will never start because write phase uses all of runtime 00:16:53.717 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:53.717 fio-3.35 00:16:53.717 Starting 1 process 00:17:03.682 00:17:03.682 fio_test: (groupid=0, jobs=1): err= 0: pid=73713: Mon Dec 9 14:10:05 2024 00:17:03.682 write: IOPS=20.3k, BW=79.5MiB/s (83.3MB/s)(795MiB/10001msec); 0 zone resets 00:17:03.682 clat (usec): min=32, max=4002, avg=48.33, stdev=81.32 00:17:03.682 lat (usec): min=32, max=4002, avg=48.81, stdev=81.33 00:17:03.682 clat percentiles (usec): 00:17:03.682 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:17:03.682 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:17:03.682 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 53], 95.00th=[ 58], 00:17:03.682 | 99.00th=[ 67], 99.50th=[ 74], 99.90th=[ 1270], 99.95th=[ 2376], 00:17:03.682 | 99.99th=[ 3523] 00:17:03.682 bw ( KiB/s): min=74352, max=88944, per=100.00%, avg=81465.68, stdev=3144.77, samples=19 00:17:03.682 iops : min=18588, max=22236, avg=20366.42, stdev=786.19, samples=19 00:17:03.682 lat (usec) : 50=86.29%, 100=13.42%, 250=0.13%, 500=0.03%, 750=0.01% 00:17:03.682 lat (usec) : 1000=0.01% 00:17:03.682 lat (msec) : 2=0.04%, 4=0.07%, 10=0.01% 00:17:03.682 cpu : usr=3.20%, sys=17.52%, ctx=203419, majf=0, minf=797 00:17:03.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.682 issued rwts: total=0,203420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.682 00:17:03.682 Run status group 0 (all jobs): 00:17:03.682 WRITE: bw=79.5MiB/s (83.3MB/s), 79.5MiB/s-79.5MiB/s (83.3MB/s-83.3MB/s), io=795MiB (833MB), run=10001-10001msec 00:17:03.682 00:17:03.682 Disk stats (read/write): 00:17:03.682 ublkb0: ios=0/201601, merge=0/0, ticks=0/7945, in_queue=7945, util=99.10% 00:17:03.682 14:10:05 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:03.683 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.683 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.940 [2024-12-09 14:10:05.476751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:03.940 [2024-12-09 14:10:05.519024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:03.940 [2024-12-09 14:10:05.519976] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:03.940 [2024-12-09 14:10:05.526566] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:03.940 [2024-12-09 14:10:05.526796] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:03.940 [2024-12-09 14:10:05.526810] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.940 14:10:05 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.940 [2024-12-09 14:10:05.542610] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:03.940 request: 00:17:03.940 { 00:17:03.940 "ublk_id": 0, 00:17:03.940 "method": "ublk_stop_disk", 00:17:03.940 "req_id": 1 00:17:03.940 } 00:17:03.940 Got JSON-RPC error response 00:17:03.940 response: 00:17:03.940 { 00:17:03.940 "code": -19, 00:17:03.940 "message": "No such device" 00:17:03.940 } 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:03.940 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.941 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.941 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.941 14:10:05 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:03.941 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.941 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.941 [2024-12-09 14:10:05.558610] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:03.941 [2024-12-09 14:10:05.562297] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:03.941 [2024-12-09 14:10:05.562325] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:03.941 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.941 14:10:05 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:03.941 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.941 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.199 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.199 14:10:05 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:04.199 14:10:05 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:04.199 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.199 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.199 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.199 14:10:05 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:04.199 14:10:05 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:04.199 14:10:05 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:04.199 14:10:05 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:04.199 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.199 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.199 14:10:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.199 14:10:05 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:04.199 14:10:05 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:04.458 14:10:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:04.458 00:17:04.458 real 0m11.163s 00:17:04.458 user 0m0.616s 00:17:04.458 sys 0m1.838s 00:17:04.458 14:10:06 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.458 14:10:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.458 ************************************ 00:17:04.458 END TEST test_create_ublk 00:17:04.458 ************************************ 00:17:04.458 14:10:06 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:04.458 14:10:06 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:04.458 14:10:06 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.458 14:10:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.458 ************************************ 00:17:04.458 START TEST test_create_multi_ublk 00:17:04.458 ************************************ 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.458 [2024-12-09 14:10:06.061548] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:04.458 [2024-12-09 14:10:06.063122] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.458 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.773 [2024-12-09 14:10:06.277659] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
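Each of the four disks in test_create_multi_ublk repeats the same pair of RPCs, and each drives the same kernel handshake visible in the debug lines: ADD_DEV, then SET_PARAMS, then START_DEV. The per-disk calls, as the harness issues them through rpc_cmd (a thin wrapper around scripts/rpc.py):

  # One shared ublk target, then one disk per malloc bdev.
  scripts/rpc.py ublk_create_target
  scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # bdev name, ublk id, 4 queues, queue depth 512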
00:17:04.773 [2024-12-09 14:10:06.277958] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:04.773 [2024-12-09 14:10:06.277970] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:04.773 [2024-12-09 14:10:06.277978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:04.773 [2024-12-09 14:10:06.289597] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:04.773 [2024-12-09 14:10:06.289615] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:04.773 [2024-12-09 14:10:06.301555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:04.773 [2024-12-09 14:10:06.302047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:04.773 [2024-12-09 14:10:06.309674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.773 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.773 [2024-12-09 14:10:06.539659] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:04.773 [2024-12-09 14:10:06.539954] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:04.773 [2024-12-09 14:10:06.539967] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:04.773 [2024-12-09 14:10:06.539973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.047 [2024-12-09 14:10:06.547592] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.047 [2024-12-09 14:10:06.547609] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.047 [2024-12-09 14:10:06.555557] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.047 [2024-12-09 14:10:06.556062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:05.047 [2024-12-09 14:10:06.579559] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.047 14:10:06 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.047 [2024-12-09 14:10:06.739640] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:05.047 [2024-12-09 14:10:06.739936] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:05.047 [2024-12-09 14:10:06.739948] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:05.047 [2024-12-09 14:10:06.739954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.047 [2024-12-09 14:10:06.747573] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.047 [2024-12-09 14:10:06.747593] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.047 [2024-12-09 14:10:06.755552] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.047 [2024-12-09 14:10:06.756051] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:05.047 [2024-12-09 14:10:06.759760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.047 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.305 [2024-12-09 14:10:06.919663] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:05.305 [2024-12-09 14:10:06.919956] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:05.305 [2024-12-09 14:10:06.919970] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:05.305 [2024-12-09 14:10:06.919975] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.305 [2024-12-09 
14:10:06.927588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.305 [2024-12-09 14:10:06.927605] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.305 [2024-12-09 14:10:06.935560] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.305 [2024-12-09 14:10:06.936048] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:05.305 [2024-12-09 14:10:06.939512] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.305 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:05.305 { 00:17:05.305 "ublk_device": "/dev/ublkb0", 00:17:05.305 "id": 0, 00:17:05.305 "queue_depth": 512, 00:17:05.305 "num_queues": 4, 00:17:05.305 "bdev_name": "Malloc0" 00:17:05.305 }, 00:17:05.305 { 00:17:05.305 "ublk_device": "/dev/ublkb1", 00:17:05.305 "id": 1, 00:17:05.305 "queue_depth": 512, 00:17:05.305 "num_queues": 4, 00:17:05.305 "bdev_name": "Malloc1" 00:17:05.305 }, 00:17:05.305 { 00:17:05.305 "ublk_device": "/dev/ublkb2", 00:17:05.305 "id": 2, 00:17:05.305 "queue_depth": 512, 00:17:05.305 "num_queues": 4, 00:17:05.305 "bdev_name": "Malloc2" 00:17:05.305 }, 00:17:05.305 { 00:17:05.305 "ublk_device": "/dev/ublkb3", 00:17:05.305 "id": 3, 00:17:05.305 "queue_depth": 512, 00:17:05.305 "num_queues": 4, 00:17:05.305 "bdev_name": "Malloc3" 00:17:05.305 } 00:17:05.305 ]' 00:17:05.306 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:05.306 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.306 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:05.306 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:05.306 14:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:05.306 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:05.306 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:05.306 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:05.306 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:05.306 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:05.306 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
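The [[ /dev/ublkbN = \/\d\e\v\/... ]] checks in this trace compare each jq extraction against a literal; the backslashes are only bash xtrace's way of printing a quoted right-hand side inside [[ ]], where an unquoted string would be treated as a glob pattern. The underlying verification is simply one field lookup per disk, e.g.:

  # Pull one field from the ublk_get_disks array and compare it.
  scripts/rpc.py ublk_get_disks | jq -r '.[2].ublk_device'   # expect: /dev/ublkb2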
00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:05.563 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.821 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.821 [2024-12-09 14:10:07.611634] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.079 [2024-12-09 14:10:07.663555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.079 [2024-12-09 14:10:07.664252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.079 [2024-12-09 14:10:07.667796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.079 [2024-12-09 14:10:07.668025] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:06.079 [2024-12-09 14:10:07.668039] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:06.079 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.079 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.079 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 [2024-12-09 14:10:07.686631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.080 [2024-12-09 14:10:07.720582] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.080 [2024-12-09 14:10:07.721215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.080 [2024-12-09 14:10:07.733564] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.080 [2024-12-09 14:10:07.733787] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:06.080 [2024-12-09 14:10:07.733796] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 [2024-12-09 14:10:07.737722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.080 [2024-12-09 14:10:07.774581] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.080 [2024-12-09 14:10:07.775177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.080 [2024-12-09 14:10:07.776782] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.080 [2024-12-09 14:10:07.777008] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:06.080 [2024-12-09 14:10:07.777020] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
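Teardown mirrors creation in reverse: each ublk_stop_disk call drives STOP_DEV and then DEL_DEV in the kernel, and once all four disks are gone the target itself is torn down at ublk.sh@91. The equivalent direct calls, matching the trace that follows:

  scripts/rpc.py ublk_stop_disk 3
  scripts/rpc.py -t 120 ublk_destroy_target   # -t raises the RPC client timeout for the slow shutdown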
00:17:06.080 [2024-12-09 14:10:07.787624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.080 [2024-12-09 14:10:07.827589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.080 [2024-12-09 14:10:07.828146] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.080 [2024-12-09 14:10:07.835568] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.080 [2024-12-09 14:10:07.835783] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:06.080 [2024-12-09 14:10:07.835796] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.080 14:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:06.337 [2024-12-09 14:10:08.027610] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:06.337 [2024-12-09 14:10:08.031222] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:06.337 [2024-12-09 14:10:08.031249] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:06.337 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:06.337 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.337 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:06.337 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.337 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.902 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.902 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.902 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:06.902 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.902 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.161 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.161 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.161 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:07.161 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.161 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.419 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.419 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.419 14:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:07.419 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.419 14:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:07.419 14:10:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:07.678 14:10:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:07.678 00:17:07.678 real 0m3.179s 00:17:07.678 user 0m0.826s 00:17:07.678 sys 0m0.136s 00:17:07.678 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.678 14:10:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.678 ************************************ 00:17:07.678 END TEST test_create_multi_ublk 00:17:07.678 ************************************ 00:17:07.678 14:10:09 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:07.678 14:10:09 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:07.678 14:10:09 ublk -- ublk/ublk.sh@130 -- # killprocess 73670 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@954 -- # '[' -z 73670 ']' 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@958 -- # kill -0 73670 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@959 -- # uname 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73670 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.678 killing process with pid 73670 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73670' 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@973 -- # kill 73670 00:17:07.678 14:10:09 ublk -- common/autotest_common.sh@978 -- # wait 73670 00:17:08.244 [2024-12-09 14:10:09.802962] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:08.244 [2024-12-09 14:10:09.803006] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:08.813 00:17:08.813 real 0m23.842s 00:17:08.813 user 0m34.339s 00:17:08.813 sys 0m9.828s 00:17:08.813 14:10:10 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.813 14:10:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.813 ************************************ 00:17:08.813 END TEST ublk 00:17:08.813 ************************************ 00:17:08.813 14:10:10 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:08.813 14:10:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:17:08.813 14:10:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.813 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:17:08.813 ************************************ 00:17:08.813 START TEST ublk_recovery 00:17:08.813 ************************************ 00:17:08.813 14:10:10 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:08.813 * Looking for test storage... 00:17:08.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:08.813 14:10:10 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:08.813 14:10:10 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:08.813 14:10:10 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:09.074 14:10:10 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.075 14:10:10 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:09.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.075 --rc genhtml_branch_coverage=1 00:17:09.075 --rc genhtml_function_coverage=1 00:17:09.075 --rc genhtml_legend=1 00:17:09.075 --rc geninfo_all_blocks=1 00:17:09.075 --rc geninfo_unexecuted_blocks=1 00:17:09.075 00:17:09.075 ' 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:09.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.075 --rc genhtml_branch_coverage=1 00:17:09.075 --rc genhtml_function_coverage=1 00:17:09.075 --rc genhtml_legend=1 00:17:09.075 --rc geninfo_all_blocks=1 00:17:09.075 --rc geninfo_unexecuted_blocks=1 00:17:09.075 00:17:09.075 ' 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:09.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.075 --rc genhtml_branch_coverage=1 00:17:09.075 --rc genhtml_function_coverage=1 00:17:09.075 --rc genhtml_legend=1 00:17:09.075 --rc geninfo_all_blocks=1 00:17:09.075 --rc geninfo_unexecuted_blocks=1 00:17:09.075 00:17:09.075 ' 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:09.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.075 --rc genhtml_branch_coverage=1 00:17:09.075 --rc genhtml_function_coverage=1 00:17:09.075 --rc genhtml_legend=1 00:17:09.075 --rc geninfo_all_blocks=1 00:17:09.075 --rc geninfo_unexecuted_blocks=1 00:17:09.075 00:17:09.075 ' 00:17:09.075 14:10:10 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:09.075 14:10:10 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:09.075 14:10:10 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:09.075 14:10:10 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:09.075 14:10:10 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:09.075 14:10:10 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:09.075 14:10:10 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:09.075 14:10:10 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:09.075 14:10:10 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:09.075 14:10:10 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:09.075 14:10:10 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74057 00:17:09.075 14:10:10 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.075 14:10:10 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74057 00:17:09.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74057 ']' 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.075 14:10:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.075 14:10:10 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:09.075 [2024-12-09 14:10:10.731111] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:17:09.075 [2024-12-09 14:10:10.731259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74057 ] 00:17:09.337 [2024-12-09 14:10:10.896133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:09.337 [2024-12-09 14:10:11.027243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.337 [2024-12-09 14:10:11.027375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:10.283 14:10:11 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.283 [2024-12-09 14:10:11.737568] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:10.283 [2024-12-09 14:10:11.739832] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.283 14:10:11 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.283 malloc0 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.283 14:10:11 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.283 [2024-12-09 14:10:11.857725] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:10.283 [2024-12-09 14:10:11.857850] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:10.283 [2024-12-09 14:10:11.857862] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:10.283 [2024-12-09 14:10:11.857870] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:10.283 [2024-12-09 14:10:11.866698] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:10.283 [2024-12-09 14:10:11.866728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:10.283 [2024-12-09 14:10:11.873591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:10.283 [2024-12-09 14:10:11.873769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:10.283 [2024-12-09 14:10:11.889576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:10.283 1 00:17:10.283 14:10:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.283 14:10:11 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:11.222 14:10:12 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74092 00:17:11.222 14:10:12 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:11.222 14:10:12 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:11.222 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.222 fio-3.35 00:17:11.222 Starting 1 process 00:17:16.489 14:10:17 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74057 00:17:16.489 14:10:17 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:21.777 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74057 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:21.777 14:10:22 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74203 00:17:21.778 14:10:22 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.778 14:10:22 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74203 00:17:21.778 14:10:22 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:21.778 14:10:22 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74203 ']' 00:17:21.778 14:10:22 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.778 14:10:22 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.778 14:10:22 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.778 14:10:22 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.778 14:10:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.778 [2024-12-09 14:10:23.000007] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
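The passage above is the crux of the recovery test: fio is started against /dev/ublkb1, the first spdk_tgt (pid 74057) is killed with -9 mid-I/O, and a second target is launched to reclaim the still-live kernel device. A minimal sketch of that flow, assuming rpc.py and spdk_tgt are on PATH with the default /var/tmp/spdk.sock socket; $spdk_pid and $fio_pid are illustrative placeholders, not variables from the script:

# Crash-and-recover flow exercised by ublk_recovery.sh (condensed sketch).
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096        # 64 MiB backing bdev, 4 KiB blocks
rpc.py ublk_start_disk malloc0 1 -q 2 -d 128        # exposes /dev/ublkb1
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
fio_pid=$!

kill -9 "$spdk_pid"                                 # simulate a target crash mid-I/O

spdk_tgt -m 0x3 -L ublk &                           # new target instance
spdk_pid=$!
# (wait for the RPC socket, then rebuild state and reclaim the device)
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096        # recreate the same backing bdev
rpc.py ublk_recover_disk malloc0 1                  # GET_DEV_INFO -> START/END_USER_RECOVERY
wait "$fio_pid"                                     # fio should complete with err=0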
00:17:21.778 [2024-12-09 14:10:23.000747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74203 ] 00:17:21.778 [2024-12-09 14:10:23.165168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:21.778 [2024-12-09 14:10:23.300405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.778 [2024-12-09 14:10:23.300510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:22.349 14:10:24 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.349 [2024-12-09 14:10:24.019564] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:22.349 [2024-12-09 14:10:24.021922] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.349 14:10:24 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.349 malloc0 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.349 14:10:24 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.349 14:10:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.349 [2024-12-09 14:10:24.139721] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:22.349 [2024-12-09 14:10:24.139776] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:22.349 [2024-12-09 14:10:24.139788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:22.610 [2024-12-09 14:10:24.147609] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:22.610 [2024-12-09 14:10:24.147642] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:22.610 [2024-12-09 14:10:24.147652] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:22.610 [2024-12-09 14:10:24.147745] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:22.610 1 00:17:22.610 14:10:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.610 14:10:24 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74092 00:17:22.610 [2024-12-09 14:10:24.155578] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:22.610 [2024-12-09 14:10:24.163311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:22.610 [2024-12-09 14:10:24.170817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:22.610 [2024-12-09 
14:10:24.170848] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:18.930 00:18:18.930 fio_test: (groupid=0, jobs=1): err= 0: pid=74104: Mon Dec 9 14:11:13 2024 00:18:18.930 read: IOPS=26.8k, BW=105MiB/s (110MB/s)(6281MiB/60002msec) 00:18:18.930 slat (nsec): min=978, max=279436, avg=4921.52, stdev=1651.21 00:18:18.930 clat (usec): min=637, max=6275.0k, avg=2323.33, stdev=37670.15 00:18:18.930 lat (usec): min=641, max=6275.0k, avg=2328.26, stdev=37670.15 00:18:18.930 clat percentiles (usec): 00:18:18.930 | 1.00th=[ 1663], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1844], 00:18:18.930 | 30.00th=[ 1876], 40.00th=[ 1893], 50.00th=[ 1909], 60.00th=[ 1942], 00:18:18.930 | 70.00th=[ 1991], 80.00th=[ 2311], 90.00th=[ 2409], 95.00th=[ 2933], 00:18:18.930 | 99.00th=[ 4817], 99.50th=[ 5735], 99.90th=[ 6849], 99.95th=[ 8094], 00:18:18.930 | 99.99th=[13304] 00:18:18.930 bw ( KiB/s): min=39393, max=131696, per=100.00%, avg=119176.46, stdev=15068.86, samples=107 00:18:18.930 iops : min= 9848, max=32924, avg=29794.11, stdev=3767.23, samples=107 00:18:18.930 write: IOPS=26.8k, BW=105MiB/s (110MB/s)(6276MiB/60002msec); 0 zone resets 00:18:18.930 slat (nsec): min=949, max=424022, avg=4957.17, stdev=1729.01 00:18:18.930 clat (usec): min=523, max=6275.4k, avg=2443.94, stdev=41402.45 00:18:18.930 lat (usec): min=528, max=6275.4k, avg=2448.90, stdev=41402.44 00:18:18.930 clat percentiles (usec): 00:18:18.930 | 1.00th=[ 1696], 5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1926], 00:18:18.930 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 2008], 60.00th=[ 2024], 00:18:18.930 | 70.00th=[ 2073], 80.00th=[ 2409], 90.00th=[ 2507], 95.00th=[ 2900], 00:18:18.930 | 99.00th=[ 4752], 99.50th=[ 5669], 99.90th=[ 6915], 99.95th=[ 8094], 00:18:18.930 | 99.99th=[13435] 00:18:18.930 bw ( KiB/s): min=39121, max=131648, per=100.00%, avg=119060.05, stdev=15194.98, samples=107 00:18:18.930 iops : min= 9780, max=32912, avg=29765.01, stdev=3798.76, samples=107 00:18:18.930 lat (usec) : 750=0.01%, 1000=0.01% 00:18:18.930 lat (msec) : 2=60.49%, 4=37.18%, 10=2.30%, 20=0.02%, >=2000=0.01% 00:18:18.930 cpu : usr=6.09%, sys=27.22%, ctx=109052, majf=0, minf=13 00:18:18.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:18.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.930 issued rwts: total=1608059,1606663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.930 00:18:18.930 Run status group 0 (all jobs): 00:18:18.930 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6281MiB (6587MB), run=60002-60002msec 00:18:18.930 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6276MiB (6581MB), run=60002-60002msec 00:18:18.930 00:18:18.930 Disk stats (read/write): 00:18:18.930 ublkb1: ios=1604741/1603401, merge=0/0, ticks=3638225/3701629, in_queue=7339855, util=99.90% 00:18:18.930 14:11:13 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.930 [2024-12-09 14:11:13.159193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:18.930 [2024-12-09 14:11:13.203575] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:18:18.930 [2024-12-09 14:11:13.203710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:18.930 [2024-12-09 14:11:13.212579] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:18.930 [2024-12-09 14:11:13.212673] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:18.930 [2024-12-09 14:11:13.212683] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.930 14:11:13 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.930 [2024-12-09 14:11:13.227628] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:18.930 [2024-12-09 14:11:13.231311] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:18.930 [2024-12-09 14:11:13.231339] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.930 14:11:13 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:18.930 14:11:13 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:18.930 14:11:13 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74203 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74203 ']' 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74203 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74203 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.930 killing process with pid 74203 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74203' 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74203 00:18:18.930 14:11:13 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74203 00:18:18.930 [2024-12-09 14:11:14.399589] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:18.930 [2024-12-09 14:11:14.399632] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:18.930 00:18:18.930 real 1m5.295s 00:18:18.930 user 1m43.799s 00:18:18.930 sys 0m35.895s 00:18:18.930 14:11:15 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.930 ************************************ 00:18:18.930 END TEST ublk_recovery 00:18:18.930 ************************************ 00:18:18.930 14:11:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.930 14:11:15 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:18.930 14:11:15 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:18.930 14:11:15 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:18.930 14:11:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.930 14:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:18.930 14:11:15 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:18.930 14:11:15 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:18.930 14:11:15 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:18:18.930 14:11:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:18.930 14:11:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:18.930 14:11:15 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:18.931 14:11:15 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:18.931 14:11:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:18.931 14:11:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:18.931 14:11:15 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:18.931 14:11:15 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:18.931 14:11:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.931 14:11:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.931 14:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:18.931 ************************************ 00:18:18.931 START TEST ftl 00:18:18.931 ************************************ 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:18.931 * Looking for test storage... 00:18:18.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.931 14:11:15 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.931 14:11:15 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.931 14:11:15 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.931 14:11:15 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.931 14:11:15 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.931 14:11:15 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.931 14:11:15 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.931 14:11:15 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.931 14:11:15 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.931 14:11:15 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.931 14:11:15 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.931 14:11:15 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:18.931 14:11:15 ftl -- scripts/common.sh@345 -- # : 1 00:18:18.931 14:11:15 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.931 14:11:15 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.931 14:11:15 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:18.931 14:11:15 ftl -- scripts/common.sh@353 -- # local d=1 00:18:18.931 14:11:15 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.931 14:11:15 ftl -- scripts/common.sh@355 -- # echo 1 00:18:18.931 14:11:15 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.931 14:11:15 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:18.931 14:11:15 ftl -- scripts/common.sh@353 -- # local d=2 00:18:18.931 14:11:15 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.931 14:11:15 ftl -- scripts/common.sh@355 -- # echo 2 00:18:18.931 14:11:15 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.931 14:11:15 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.931 14:11:15 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.931 14:11:15 ftl -- scripts/common.sh@368 -- # return 0 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:18.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.931 --rc genhtml_branch_coverage=1 00:18:18.931 --rc genhtml_function_coverage=1 00:18:18.931 --rc genhtml_legend=1 00:18:18.931 --rc geninfo_all_blocks=1 00:18:18.931 --rc geninfo_unexecuted_blocks=1 00:18:18.931 00:18:18.931 ' 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:18.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.931 --rc genhtml_branch_coverage=1 00:18:18.931 --rc genhtml_function_coverage=1 00:18:18.931 --rc genhtml_legend=1 00:18:18.931 --rc geninfo_all_blocks=1 00:18:18.931 --rc geninfo_unexecuted_blocks=1 00:18:18.931 00:18:18.931 ' 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:18.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.931 --rc genhtml_branch_coverage=1 00:18:18.931 --rc genhtml_function_coverage=1 00:18:18.931 --rc genhtml_legend=1 00:18:18.931 --rc geninfo_all_blocks=1 00:18:18.931 --rc geninfo_unexecuted_blocks=1 00:18:18.931 00:18:18.931 ' 00:18:18.931 14:11:15 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:18.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.931 --rc genhtml_branch_coverage=1 00:18:18.931 --rc genhtml_function_coverage=1 00:18:18.931 --rc genhtml_legend=1 00:18:18.931 --rc geninfo_all_blocks=1 00:18:18.931 --rc geninfo_unexecuted_blocks=1 00:18:18.931 00:18:18.931 ' 00:18:18.931 14:11:15 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:18.931 14:11:15 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:18.931 14:11:15 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.931 14:11:16 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.931 14:11:16 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:18.931 14:11:16 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:18.931 14:11:16 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.931 14:11:16 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:18.931 14:11:16 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:18.931 14:11:16 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.931 14:11:16 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.931 14:11:16 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:18.931 14:11:16 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:18.931 14:11:16 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.931 14:11:16 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.931 14:11:16 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:18.931 14:11:16 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:18.931 14:11:16 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.931 14:11:16 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.931 14:11:16 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:18.931 14:11:16 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:18.931 14:11:16 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.931 14:11:16 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.931 14:11:16 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.931 14:11:16 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.931 14:11:16 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:18.931 14:11:16 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:18.931 14:11:16 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.931 14:11:16 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:18.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:18.931 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:18.931 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:18.931 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:18.931 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75018 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75018 00:18:18.931 14:11:16 ftl -- common/autotest_common.sh@835 -- # '[' -z 75018 ']' 00:18:18.931 14:11:16 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:18.931 14:11:16 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.931 14:11:16 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.931 14:11:16 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.931 14:11:16 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.931 14:11:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:18.931 [2024-12-09 14:11:16.480060] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:18.931 [2024-12-09 14:11:16.480150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75018 ] 00:18:18.931 [2024-12-09 14:11:16.635280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.931 [2024-12-09 14:11:16.730205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.931 14:11:17 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.931 14:11:17 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:18.931 14:11:17 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:18.931 14:11:17 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@50 -- # break 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:18.931 14:11:18 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:18.931 14:11:19 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:18.931 14:11:19 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:18.932 14:11:19 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:18.932 14:11:19 ftl -- ftl/ftl.sh@63 -- # break 00:18:18.932 14:11:19 ftl -- ftl/ftl.sh@66 -- # killprocess 75018 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@954 -- # '[' -z 75018 ']' 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@958 -- # kill -0 75018 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@959 -- # uname 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.932 14:11:19 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75018 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75018' 00:18:18.932 killing process with pid 75018 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@973 -- # kill 75018 00:18:18.932 14:11:19 ftl -- common/autotest_common.sh@978 -- # wait 75018 00:18:18.932 14:11:20 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:18.932 14:11:20 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:18.932 14:11:20 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:18.932 14:11:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.932 14:11:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:18.932 ************************************ 00:18:18.932 START TEST ftl_fio_basic 00:18:18.932 ************************************ 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:18.932 * Looking for test storage... 00:18:18.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:18.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.932 --rc genhtml_branch_coverage=1 00:18:18.932 --rc genhtml_function_coverage=1 00:18:18.932 --rc genhtml_legend=1 00:18:18.932 --rc geninfo_all_blocks=1 00:18:18.932 --rc geninfo_unexecuted_blocks=1 00:18:18.932 00:18:18.932 ' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:18.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.932 --rc genhtml_branch_coverage=1 00:18:18.932 --rc genhtml_function_coverage=1 00:18:18.932 --rc genhtml_legend=1 00:18:18.932 --rc geninfo_all_blocks=1 00:18:18.932 --rc geninfo_unexecuted_blocks=1 00:18:18.932 00:18:18.932 ' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:18.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.932 --rc genhtml_branch_coverage=1 00:18:18.932 --rc genhtml_function_coverage=1 00:18:18.932 --rc genhtml_legend=1 00:18:18.932 --rc geninfo_all_blocks=1 00:18:18.932 --rc geninfo_unexecuted_blocks=1 00:18:18.932 00:18:18.932 ' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:18.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.932 --rc genhtml_branch_coverage=1 00:18:18.932 --rc genhtml_function_coverage=1 00:18:18.932 --rc genhtml_legend=1 00:18:18.932 --rc geninfo_all_blocks=1 00:18:18.932 --rc geninfo_unexecuted_blocks=1 00:18:18.932 00:18:18.932 ' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75147 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75147 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75147 ']' 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.932 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.933 14:11:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:18.933 14:11:20 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:18.933 [2024-12-09 14:11:20.618067] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
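The scripts/common.sh xtrace that recurs above (here checking lcov 1.15 against 2 to pick the right coverage flags) is a component-wise version compare. A condensed sketch of the lt helper, not the verbatim implementation:

# Condensed sketch of lt/cmp_versions from scripts/common.sh.
# Fields are split on '.', '-' and ':' and compared numerically, left to right.
lt() {  # succeeds (returns 0) when version $1 < version $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # all fields equal, so not less-than
}
lt 1.15 2 && echo 'lcov < 2: use the 1.x LCOV_OPTS'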
00:18:18.933 [2024-12-09 14:11:20.618401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75147 ] 00:18:19.192 [2024-12-09 14:11:20.778144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:19.192 [2024-12-09 14:11:20.899308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.192 [2024-12-09 14:11:20.899639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.192 [2024-12-09 14:11:20.899616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.756 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.756 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:19.756 14:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:19.756 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:19.756 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:19.756 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:19.756 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:19.756 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:20.014 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:20.014 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:20.014 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:20.014 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:20.014 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:20.014 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:20.014 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:20.014 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:20.273 { 00:18:20.273 "name": "nvme0n1", 00:18:20.273 "aliases": [ 00:18:20.273 "81ab5e48-dd5f-4696-bf5e-f729af40a666" 00:18:20.273 ], 00:18:20.273 "product_name": "NVMe disk", 00:18:20.273 "block_size": 4096, 00:18:20.273 "num_blocks": 1310720, 00:18:20.273 "uuid": "81ab5e48-dd5f-4696-bf5e-f729af40a666", 00:18:20.273 "numa_id": -1, 00:18:20.273 "assigned_rate_limits": { 00:18:20.273 "rw_ios_per_sec": 0, 00:18:20.273 "rw_mbytes_per_sec": 0, 00:18:20.273 "r_mbytes_per_sec": 0, 00:18:20.273 "w_mbytes_per_sec": 0 00:18:20.273 }, 00:18:20.273 "claimed": false, 00:18:20.273 "zoned": false, 00:18:20.273 "supported_io_types": { 00:18:20.273 "read": true, 00:18:20.273 "write": true, 00:18:20.273 "unmap": true, 00:18:20.273 "flush": true, 00:18:20.273 "reset": true, 00:18:20.273 "nvme_admin": true, 00:18:20.273 "nvme_io": true, 00:18:20.273 "nvme_io_md": false, 00:18:20.273 "write_zeroes": true, 00:18:20.273 "zcopy": false, 00:18:20.273 "get_zone_info": false, 00:18:20.273 "zone_management": false, 00:18:20.273 "zone_append": false, 00:18:20.273 "compare": true, 00:18:20.273 "compare_and_write": false, 00:18:20.273 "abort": true, 00:18:20.273 
"seek_hole": false, 00:18:20.273 "seek_data": false, 00:18:20.273 "copy": true, 00:18:20.273 "nvme_iov_md": false 00:18:20.273 }, 00:18:20.273 "driver_specific": { 00:18:20.273 "nvme": [ 00:18:20.273 { 00:18:20.273 "pci_address": "0000:00:11.0", 00:18:20.273 "trid": { 00:18:20.273 "trtype": "PCIe", 00:18:20.273 "traddr": "0000:00:11.0" 00:18:20.273 }, 00:18:20.273 "ctrlr_data": { 00:18:20.273 "cntlid": 0, 00:18:20.273 "vendor_id": "0x1b36", 00:18:20.273 "model_number": "QEMU NVMe Ctrl", 00:18:20.273 "serial_number": "12341", 00:18:20.273 "firmware_revision": "8.0.0", 00:18:20.273 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:20.273 "oacs": { 00:18:20.273 "security": 0, 00:18:20.273 "format": 1, 00:18:20.273 "firmware": 0, 00:18:20.273 "ns_manage": 1 00:18:20.273 }, 00:18:20.273 "multi_ctrlr": false, 00:18:20.273 "ana_reporting": false 00:18:20.273 }, 00:18:20.273 "vs": { 00:18:20.273 "nvme_version": "1.4" 00:18:20.273 }, 00:18:20.273 "ns_data": { 00:18:20.273 "id": 1, 00:18:20.273 "can_share": false 00:18:20.273 } 00:18:20.273 } 00:18:20.273 ], 00:18:20.273 "mp_policy": "active_passive" 00:18:20.273 } 00:18:20.273 } 00:18:20.273 ]' 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:20.273 14:11:21 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:20.530 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:20.530 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:20.788 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5011c2ae-f153-4a60-8df8-bedfaa95a747 00:18:20.788 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5011c2ae-f153-4a60-8df8-bedfaa95a747 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4af160e1-f1b4-4962-bd28-eb29a15374f6 
00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.047 { 00:18:21.047 "name": "4af160e1-f1b4-4962-bd28-eb29a15374f6", 00:18:21.047 "aliases": [ 00:18:21.047 "lvs/nvme0n1p0" 00:18:21.047 ], 00:18:21.047 "product_name": "Logical Volume", 00:18:21.047 "block_size": 4096, 00:18:21.047 "num_blocks": 26476544, 00:18:21.047 "uuid": "4af160e1-f1b4-4962-bd28-eb29a15374f6", 00:18:21.047 "assigned_rate_limits": { 00:18:21.047 "rw_ios_per_sec": 0, 00:18:21.047 "rw_mbytes_per_sec": 0, 00:18:21.047 "r_mbytes_per_sec": 0, 00:18:21.047 "w_mbytes_per_sec": 0 00:18:21.047 }, 00:18:21.047 "claimed": false, 00:18:21.047 "zoned": false, 00:18:21.047 "supported_io_types": { 00:18:21.047 "read": true, 00:18:21.047 "write": true, 00:18:21.047 "unmap": true, 00:18:21.047 "flush": false, 00:18:21.047 "reset": true, 00:18:21.047 "nvme_admin": false, 00:18:21.047 "nvme_io": false, 00:18:21.047 "nvme_io_md": false, 00:18:21.047 "write_zeroes": true, 00:18:21.047 "zcopy": false, 00:18:21.047 "get_zone_info": false, 00:18:21.047 "zone_management": false, 00:18:21.047 "zone_append": false, 00:18:21.047 "compare": false, 00:18:21.047 "compare_and_write": false, 00:18:21.047 "abort": false, 00:18:21.047 "seek_hole": true, 00:18:21.047 "seek_data": true, 00:18:21.047 "copy": false, 00:18:21.047 "nvme_iov_md": false 00:18:21.047 }, 00:18:21.047 "driver_specific": { 00:18:21.047 "lvol": { 00:18:21.047 "lvol_store_uuid": "5011c2ae-f153-4a60-8df8-bedfaa95a747", 00:18:21.047 "base_bdev": "nvme0n1", 00:18:21.047 "thin_provision": true, 00:18:21.047 "num_allocated_clusters": 0, 00:18:21.047 "snapshot": false, 00:18:21.047 "clone": false, 00:18:21.047 "esnap_clone": false 00:18:21.047 } 00:18:21.047 } 00:18:21.047 } 00:18:21.047 ]' 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.047 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.305 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:21.305 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:21.305 14:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:21.305 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:21.305 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:21.305 14:11:22 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.563 14:11:23 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.563 { 00:18:21.563 "name": "4af160e1-f1b4-4962-bd28-eb29a15374f6", 00:18:21.563 "aliases": [ 00:18:21.563 "lvs/nvme0n1p0" 00:18:21.563 ], 00:18:21.563 "product_name": "Logical Volume", 00:18:21.563 "block_size": 4096, 00:18:21.563 "num_blocks": 26476544, 00:18:21.563 "uuid": "4af160e1-f1b4-4962-bd28-eb29a15374f6", 00:18:21.563 "assigned_rate_limits": { 00:18:21.563 "rw_ios_per_sec": 0, 00:18:21.563 "rw_mbytes_per_sec": 0, 00:18:21.563 "r_mbytes_per_sec": 0, 00:18:21.563 "w_mbytes_per_sec": 0 00:18:21.563 }, 00:18:21.563 "claimed": false, 00:18:21.563 "zoned": false, 00:18:21.563 "supported_io_types": { 00:18:21.563 "read": true, 00:18:21.563 "write": true, 00:18:21.563 "unmap": true, 00:18:21.563 "flush": false, 00:18:21.563 "reset": true, 00:18:21.563 "nvme_admin": false, 00:18:21.563 "nvme_io": false, 00:18:21.563 "nvme_io_md": false, 00:18:21.563 "write_zeroes": true, 00:18:21.563 "zcopy": false, 00:18:21.563 "get_zone_info": false, 00:18:21.563 "zone_management": false, 00:18:21.563 "zone_append": false, 00:18:21.563 "compare": false, 00:18:21.563 "compare_and_write": false, 00:18:21.563 "abort": false, 00:18:21.563 "seek_hole": true, 00:18:21.563 "seek_data": true, 00:18:21.563 "copy": false, 00:18:21.563 "nvme_iov_md": false 00:18:21.563 }, 00:18:21.563 "driver_specific": { 00:18:21.563 "lvol": { 00:18:21.563 "lvol_store_uuid": "5011c2ae-f153-4a60-8df8-bedfaa95a747", 00:18:21.563 "base_bdev": "nvme0n1", 00:18:21.563 "thin_provision": true, 00:18:21.563 "num_allocated_clusters": 0, 00:18:21.563 "snapshot": false, 00:18:21.563 "clone": false, 00:18:21.563 "esnap_clone": false 00:18:21.563 } 00:18:21.563 } 00:18:21.563 } 00:18:21.563 ]' 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.563 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:21.822 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:21.822 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4af160e1-f1b4-4962-bd28-eb29a15374f6 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:22.080 { 00:18:22.080 "name": "4af160e1-f1b4-4962-bd28-eb29a15374f6", 00:18:22.080 "aliases": [ 00:18:22.080 "lvs/nvme0n1p0" 00:18:22.080 ], 00:18:22.080 "product_name": "Logical Volume", 00:18:22.080 "block_size": 4096, 00:18:22.080 "num_blocks": 26476544, 00:18:22.080 "uuid": "4af160e1-f1b4-4962-bd28-eb29a15374f6", 00:18:22.080 "assigned_rate_limits": { 00:18:22.080 "rw_ios_per_sec": 0, 00:18:22.080 "rw_mbytes_per_sec": 0, 00:18:22.080 "r_mbytes_per_sec": 0, 00:18:22.080 "w_mbytes_per_sec": 0 00:18:22.080 }, 00:18:22.080 "claimed": false, 00:18:22.080 "zoned": false, 00:18:22.080 "supported_io_types": { 00:18:22.080 "read": true, 00:18:22.080 "write": true, 00:18:22.080 "unmap": true, 00:18:22.080 "flush": false, 00:18:22.080 "reset": true, 00:18:22.080 "nvme_admin": false, 00:18:22.080 "nvme_io": false, 00:18:22.080 "nvme_io_md": false, 00:18:22.080 "write_zeroes": true, 00:18:22.080 "zcopy": false, 00:18:22.080 "get_zone_info": false, 00:18:22.080 "zone_management": false, 00:18:22.080 "zone_append": false, 00:18:22.080 "compare": false, 00:18:22.080 "compare_and_write": false, 00:18:22.080 "abort": false, 00:18:22.080 "seek_hole": true, 00:18:22.080 "seek_data": true, 00:18:22.080 "copy": false, 00:18:22.080 "nvme_iov_md": false 00:18:22.080 }, 00:18:22.080 "driver_specific": { 00:18:22.080 "lvol": { 00:18:22.080 "lvol_store_uuid": "5011c2ae-f153-4a60-8df8-bedfaa95a747", 00:18:22.080 "base_bdev": "nvme0n1", 00:18:22.080 "thin_provision": true, 00:18:22.080 "num_allocated_clusters": 0, 00:18:22.080 "snapshot": false, 00:18:22.080 "clone": false, 00:18:22.080 "esnap_clone": false 00:18:22.080 } 00:18:22.080 } 00:18:22.080 } 00:18:22.080 ]' 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:22.080 14:11:23 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4af160e1-f1b4-4962-bd28-eb29a15374f6 -c nvc0n1p0 --l2p_dram_limit 60 00:18:22.339 [2024-12-09 14:11:24.030084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.339 [2024-12-09 14:11:24.030258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.339 [2024-12-09 14:11:24.030278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:22.339 
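A genuine scripting bug is recorded a few lines up: /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh line 52 executed '[' -eq 1 ']' and bash rejected it with "unary operator expected". The variable on the left of -eq expanded to nothing while unquoted, so the [ builtin saw -eq as its first argument; the test simply evaluated false and the run carried on, which is why the trace continues normally. A small reproduction and the usual defensive rewrite (the variable name here is illustrative, not the one fio.sh actually uses):

  flag=
  [ $flag -eq 1 ] && echo enabled         # expands to '[ -eq 1 ]': unary operator expected
  [ "${flag:-0}" -eq 1 ] && echo enabled  # quoted, with a default, the test stays well-formed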
[2024-12-09 14:11:24.030286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.339 [2024-12-09 14:11:24.030341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.339 [2024-12-09 14:11:24.030351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.339 [2024-12-09 14:11:24.030360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:22.339 [2024-12-09 14:11:24.030367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.339 [2024-12-09 14:11:24.030397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.339 [2024-12-09 14:11:24.031035] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.339 [2024-12-09 14:11:24.031051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.339 [2024-12-09 14:11:24.031058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.339 [2024-12-09 14:11:24.031066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:18:22.339 [2024-12-09 14:11:24.031072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.339 [2024-12-09 14:11:24.031122] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49846e67-4d91-4c3b-a707-238ec219770f 00:18:22.339 [2024-12-09 14:11:24.032051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.339 [2024-12-09 14:11:24.032074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:22.340 [2024-12-09 14:11:24.032082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:22.340 [2024-12-09 14:11:24.032089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.340 [2024-12-09 14:11:24.036680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.340 [2024-12-09 14:11:24.036787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.340 [2024-12-09 14:11:24.036799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:18:22.340 [2024-12-09 14:11:24.036807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.340 [2024-12-09 14:11:24.036890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.340 [2024-12-09 14:11:24.036899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.340 [2024-12-09 14:11:24.036906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:22.340 [2024-12-09 14:11:24.036916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.340 [2024-12-09 14:11:24.036962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.340 [2024-12-09 14:11:24.036972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.340 [2024-12-09 14:11:24.036978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.340 [2024-12-09 14:11:24.036984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.340 [2024-12-09 14:11:24.037006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.340 [2024-12-09 14:11:24.039841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.340 [2024-12-09 
14:11:24.039930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.340 [2024-12-09 14:11:24.039945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.838 ms 00:18:22.340 [2024-12-09 14:11:24.039953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.340 [2024-12-09 14:11:24.039988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.340 [2024-12-09 14:11:24.039995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.340 [2024-12-09 14:11:24.040002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.340 [2024-12-09 14:11:24.040008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.340 [2024-12-09 14:11:24.040028] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:22.340 [2024-12-09 14:11:24.040144] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.340 [2024-12-09 14:11:24.040156] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.340 [2024-12-09 14:11:24.040164] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.340 [2024-12-09 14:11:24.040173] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040180] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040188] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:22.340 [2024-12-09 14:11:24.040194] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.340 [2024-12-09 14:11:24.040201] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.340 [2024-12-09 14:11:24.040206] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.340 [2024-12-09 14:11:24.040212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.340 [2024-12-09 14:11:24.040219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.340 [2024-12-09 14:11:24.040226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:18:22.340 [2024-12-09 14:11:24.040232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.340 [2024-12-09 14:11:24.040300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.340 [2024-12-09 14:11:24.040307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.340 [2024-12-09 14:11:24.040314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:22.340 [2024-12-09 14:11:24.040319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.340 [2024-12-09 14:11:24.040413] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.340 [2024-12-09 14:11:24.040420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.340 [2024-12-09 14:11:24.040429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040442] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:22.340 [2024-12-09 14:11:24.040447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.340 [2024-12-09 14:11:24.040467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.340 [2024-12-09 14:11:24.040478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.340 [2024-12-09 14:11:24.040483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:22.340 [2024-12-09 14:11:24.040490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.340 [2024-12-09 14:11:24.040495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.340 [2024-12-09 14:11:24.040502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:22.340 [2024-12-09 14:11:24.040507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.340 [2024-12-09 14:11:24.040520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.340 [2024-12-09 14:11:24.040552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.340 [2024-12-09 14:11:24.040570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.340 [2024-12-09 14:11:24.040588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.340 [2024-12-09 14:11:24.040607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.340 [2024-12-09 14:11:24.040628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.340 [2024-12-09 14:11:24.040649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.340 [2024-12-09 14:11:24.040654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:22.340 [2024-12-09 14:11:24.040660] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.340 [2024-12-09 14:11:24.040665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.340 [2024-12-09 14:11:24.040671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:22.340 [2024-12-09 14:11:24.040676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.340 [2024-12-09 14:11:24.040688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:22.340 [2024-12-09 14:11:24.040694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040698] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.340 [2024-12-09 14:11:24.040706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.340 [2024-12-09 14:11:24.040712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.340 [2024-12-09 14:11:24.040725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.340 [2024-12-09 14:11:24.040733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.340 [2024-12-09 14:11:24.040738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.340 [2024-12-09 14:11:24.040745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.340 [2024-12-09 14:11:24.040750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.340 [2024-12-09 14:11:24.040756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.340 [2024-12-09 14:11:24.040762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.340 [2024-12-09 14:11:24.040770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.340 [2024-12-09 14:11:24.040777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:22.340 [2024-12-09 14:11:24.040784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:22.340 [2024-12-09 14:11:24.040789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:22.340 [2024-12-09 14:11:24.040796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:22.340 [2024-12-09 14:11:24.040803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:22.340 [2024-12-09 14:11:24.040810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:22.341 [2024-12-09 14:11:24.040815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:22.341 [2024-12-09 14:11:24.040822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:22.341 [2024-12-09 14:11:24.040827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:22.341 [2024-12-09 14:11:24.040835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:22.341 [2024-12-09 14:11:24.040840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:22.341 [2024-12-09 14:11:24.040847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:22.341 [2024-12-09 14:11:24.040852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:22.341 [2024-12-09 14:11:24.040859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:22.341 [2024-12-09 14:11:24.040864] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.341 [2024-12-09 14:11:24.040871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.341 [2024-12-09 14:11:24.040879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.341 [2024-12-09 14:11:24.040886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.341 [2024-12-09 14:11:24.040891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.341 [2024-12-09 14:11:24.040898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.341 [2024-12-09 14:11:24.040904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.341 [2024-12-09 14:11:24.040911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.341 [2024-12-09 14:11:24.040917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:18:22.341 [2024-12-09 14:11:24.040923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.341 [2024-12-09 14:11:24.040976] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
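The layout dump that startup just printed is worth a sanity check, because the numbers interlock: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB "Region l2p" placed on the NV cache, and the same entry count at the 4096-byte block size means ftl0 will expose 80 GiB of user LBAs (20971520 num_blocks, as the bdev_get_bdevs output further down confirms). The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that table stays resident, which the later "l2p maximum resident size is: 59 (of 60) MiB" notice reflects. Plain shell arithmetic over values copied from the dump:

  entries=20971520; l2p_entry_size=4; block_size=4096
  echo $(( entries * l2p_entry_size / 1024 / 1024 ))    # 80 -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( entries * block_size / 1024 / 1024 / 1024 )) # 80 -> GiB of addressable user data on ftl0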
00:18:22.341 [2024-12-09 14:11:24.040993] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:24.867 [2024-12-09 14:11:26.055738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.055806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:24.867 [2024-12-09 14:11:26.055822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2014.751 ms 00:18:24.867 [2024-12-09 14:11:26.055832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.081258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.081634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.867 [2024-12-09 14:11:26.081659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.214 ms 00:18:24.867 [2024-12-09 14:11:26.081669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.081806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.081818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:24.867 [2024-12-09 14:11:26.081827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:24.867 [2024-12-09 14:11:26.081838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.130558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.130671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.867 [2024-12-09 14:11:26.130729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.669 ms 00:18:24.867 [2024-12-09 14:11:26.130775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.130843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.130885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.867 [2024-12-09 14:11:26.130925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:24.867 [2024-12-09 14:11:26.130961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.131351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.131435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.867 [2024-12-09 14:11:26.131485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:24.867 [2024-12-09 14:11:26.131530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.131705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.131755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.867 [2024-12-09 14:11:26.131797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:24.867 [2024-12-09 14:11:26.131843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.146202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.146292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.867 [2024-12-09 
14:11:26.146337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.303 ms 00:18:24.867 [2024-12-09 14:11:26.146379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.157788] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:24.867 [2024-12-09 14:11:26.172192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.172276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:24.867 [2024-12-09 14:11:26.172328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.688 ms 00:18:24.867 [2024-12-09 14:11:26.172374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.230777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.230923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:24.867 [2024-12-09 14:11:26.230986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.336 ms 00:18:24.867 [2024-12-09 14:11:26.231030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.231503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.231607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:24.867 [2024-12-09 14:11:26.231670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:18:24.867 [2024-12-09 14:11:26.231719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.255347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.255438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:24.867 [2024-12-09 14:11:26.255485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.515 ms 00:18:24.867 [2024-12-09 14:11:26.255529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.278125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.278222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:24.867 [2024-12-09 14:11:26.278272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.472 ms 00:18:24.867 [2024-12-09 14:11:26.278313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.279175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.279279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:24.867 [2024-12-09 14:11:26.279335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:18:24.867 [2024-12-09 14:11:26.279380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.359201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.359236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:24.867 [2024-12-09 14:11:26.359253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.730 ms 00:18:24.867 [2024-12-09 14:11:26.359263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 
14:11:26.383761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.383801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:24.867 [2024-12-09 14:11:26.383815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.412 ms 00:18:24.867 [2024-12-09 14:11:26.383822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.406978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.407005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:24.867 [2024-12-09 14:11:26.407017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.112 ms 00:18:24.867 [2024-12-09 14:11:26.407024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.430585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.430618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:24.867 [2024-12-09 14:11:26.430631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.516 ms 00:18:24.867 [2024-12-09 14:11:26.430638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.430692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.430702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:24.867 [2024-12-09 14:11:26.430717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:24.867 [2024-12-09 14:11:26.430724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.867 [2024-12-09 14:11:26.430808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.867 [2024-12-09 14:11:26.430817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:24.867 [2024-12-09 14:11:26.430827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:24.868 [2024-12-09 14:11:26.430835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.868 [2024-12-09 14:11:26.431851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2401.316 ms, result 0 00:18:24.868 { 00:18:24.868 "name": "ftl0", 00:18:24.868 "uuid": "49846e67-4d91-4c3b-a707-238ec219770f" 00:18:24.868 } 00:18:24.868 14:11:26 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:24.868 14:11:26 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:24.868 14:11:26 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:24.868 14:11:26 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:24.868 14:11:26 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:24.868 14:11:26 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:24.868 14:11:26 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:24.868 14:11:26 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:25.126 [ 00:18:25.126 { 00:18:25.126 "name": "ftl0", 00:18:25.126 "aliases": [ 00:18:25.126 "49846e67-4d91-4c3b-a707-238ec219770f" 00:18:25.126 ], 00:18:25.126 "product_name": "FTL 
disk", 00:18:25.126 "block_size": 4096, 00:18:25.126 "num_blocks": 20971520, 00:18:25.126 "uuid": "49846e67-4d91-4c3b-a707-238ec219770f", 00:18:25.126 "assigned_rate_limits": { 00:18:25.126 "rw_ios_per_sec": 0, 00:18:25.126 "rw_mbytes_per_sec": 0, 00:18:25.126 "r_mbytes_per_sec": 0, 00:18:25.126 "w_mbytes_per_sec": 0 00:18:25.126 }, 00:18:25.126 "claimed": false, 00:18:25.126 "zoned": false, 00:18:25.126 "supported_io_types": { 00:18:25.126 "read": true, 00:18:25.126 "write": true, 00:18:25.126 "unmap": true, 00:18:25.126 "flush": true, 00:18:25.126 "reset": false, 00:18:25.126 "nvme_admin": false, 00:18:25.126 "nvme_io": false, 00:18:25.126 "nvme_io_md": false, 00:18:25.126 "write_zeroes": true, 00:18:25.126 "zcopy": false, 00:18:25.126 "get_zone_info": false, 00:18:25.126 "zone_management": false, 00:18:25.126 "zone_append": false, 00:18:25.126 "compare": false, 00:18:25.126 "compare_and_write": false, 00:18:25.126 "abort": false, 00:18:25.126 "seek_hole": false, 00:18:25.126 "seek_data": false, 00:18:25.126 "copy": false, 00:18:25.126 "nvme_iov_md": false 00:18:25.126 }, 00:18:25.126 "driver_specific": { 00:18:25.126 "ftl": { 00:18:25.126 "base_bdev": "4af160e1-f1b4-4962-bd28-eb29a15374f6", 00:18:25.126 "cache": "nvc0n1p0" 00:18:25.126 } 00:18:25.126 } 00:18:25.126 } 00:18:25.126 ] 00:18:25.383 14:11:26 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:25.383 14:11:26 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:25.383 14:11:26 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:25.383 14:11:27 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:25.383 14:11:27 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:25.642 [2024-12-09 14:11:27.273084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.273129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:25.642 [2024-12-09 14:11:27.273142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:25.642 [2024-12-09 14:11:27.273152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.273187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:25.642 [2024-12-09 14:11:27.275817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.275846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:25.642 [2024-12-09 14:11:27.275858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:18:25.642 [2024-12-09 14:11:27.275867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.276335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.276349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:25.642 [2024-12-09 14:11:27.276360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:18:25.642 [2024-12-09 14:11:27.276367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.279610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.279631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:25.642 
[2024-12-09 14:11:27.279643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:18:25.642 [2024-12-09 14:11:27.279652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.285896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.285918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:25.642 [2024-12-09 14:11:27.285928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.214 ms 00:18:25.642 [2024-12-09 14:11:27.285935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.309156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.309184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:25.642 [2024-12-09 14:11:27.309209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.129 ms 00:18:25.642 [2024-12-09 14:11:27.309217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.323453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.323481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:25.642 [2024-12-09 14:11:27.323496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.189 ms 00:18:25.642 [2024-12-09 14:11:27.323504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.323715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.323727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:25.642 [2024-12-09 14:11:27.323736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:18:25.642 [2024-12-09 14:11:27.323743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.346244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.346271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:25.642 [2024-12-09 14:11:27.346283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.472 ms 00:18:25.642 [2024-12-09 14:11:27.346290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.368892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.368918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:25.642 [2024-12-09 14:11:27.368930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.556 ms 00:18:25.642 [2024-12-09 14:11:27.368937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.390921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.390948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:25.642 [2024-12-09 14:11:27.390959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.939 ms 00:18:25.642 [2024-12-09 14:11:27.390966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.412823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.642 [2024-12-09 14:11:27.412849] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:25.642 [2024-12-09 14:11:27.412860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.764 ms 00:18:25.642 [2024-12-09 14:11:27.412867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.642 [2024-12-09 14:11:27.412914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:25.642 [2024-12-09 14:11:27.412928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.412939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.412948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.412957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.412965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.412973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.412981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.412992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 
[2024-12-09 14:11:27.413114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:25.642 [2024-12-09 14:11:27.413251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:25.643 [2024-12-09 14:11:27.413335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:25.643 [2024-12-09 14:11:27.413806] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:25.643 [2024-12-09 14:11:27.413816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49846e67-4d91-4c3b-a707-238ec219770f 00:18:25.643 [2024-12-09 14:11:27.413823] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:25.643 [2024-12-09 14:11:27.413833] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:25.643 [2024-12-09 14:11:27.413840] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:25.643 [2024-12-09 14:11:27.413851] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:25.643 [2024-12-09 14:11:27.413858] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:25.643 [2024-12-09 14:11:27.413867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:25.643 [2024-12-09 14:11:27.413874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:25.643 [2024-12-09 14:11:27.413881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:25.643 [2024-12-09 14:11:27.413888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:25.643 [2024-12-09 14:11:27.413896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.643 [2024-12-09 14:11:27.413904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:25.643 [2024-12-09 14:11:27.413913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:18:25.643 [2024-12-09 14:11:27.413920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.643 [2024-12-09 14:11:27.423934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.643 [2024-12-09 14:11:27.423957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:25.643 [2024-12-09 14:11:27.423967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.972 ms 00:18:25.643 [2024-12-09 14:11:27.423973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.643 [2024-12-09 14:11:27.424254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.643 [2024-12-09 14:11:27.424261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:25.643 [2024-12-09 14:11:27.424268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:25.643 [2024-12-09 14:11:27.424274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.902 [2024-12-09 14:11:27.459872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.902 [2024-12-09 14:11:27.459900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:25.902 [2024-12-09 14:11:27.459910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.459917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
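In the shutdown statistics above, "WAF: inf" is expected rather than alarming: write amplification is the ratio of media writes to user writes, and this device took 960 internal writes (presumably superblock, band, and chunk metadata from startup and shutdown) against zero user writes, so the ratio is undefined and prints as inf. The band dump tells the same story: all 100 bands still read "0 / 261120 wr_cnt: 0 state: free", i.e. the device was created, never written by fio, and cleanly torn down before the actual workloads start. The ratio itself, with awk standing in for the float division (a sketch of the arithmetic, not SPDK's code):

  total_writes=960; user_writes=0
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { print (u ? t / u : "inf") }'  # -> inf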
00:18:25.903 [2024-12-09 14:11:27.459978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.459985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:25.903 [2024-12-09 14:11:27.459993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.459998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.460087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.460100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.903 [2024-12-09 14:11:27.460108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.460113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.460136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.460143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.903 [2024-12-09 14:11:27.460150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.460155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.523800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.523834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.903 [2024-12-09 14:11:27.523844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.523850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.572155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.572179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.903 [2024-12-09 14:11:27.572189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.572195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.572257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.572264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.903 [2024-12-09 14:11:27.572274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.572280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.572354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.572361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.903 [2024-12-09 14:11:27.572369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.572374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.572456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.572464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.903 [2024-12-09 14:11:27.572471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 
14:11:27.572478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.572524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.572532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:25.903 [2024-12-09 14:11:27.572550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.572556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.572598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.572605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.903 [2024-12-09 14:11:27.572612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.572620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.572666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.903 [2024-12-09 14:11:27.572674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.903 [2024-12-09 14:11:27.572682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.903 [2024-12-09 14:11:27.572687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.903 [2024-12-09 14:11:27.572827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 299.725 ms, result 0 00:18:25.903 true 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75147 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75147 ']' 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75147 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75147 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75147' 00:18:25.903 killing process with pid 75147 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75147 00:18:25.903 14:11:27 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75147 00:18:38.101 14:11:37 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:38.101 14:11:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:38.101 14:11:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:38.101 14:11:37 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.101 14:11:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:38.101 14:11:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:38.101 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:38.101 fio-3.35 00:18:38.101 Starting 1 thread 00:18:41.389 00:18:41.389 test: (groupid=0, jobs=1): err= 0: pid=75321: Mon Dec 9 14:11:42 2024 00:18:41.389 read: IOPS=1122, BW=74.5MiB/s (78.2MB/s)(255MiB/3415msec) 00:18:41.389 slat (nsec): min=3060, max=24615, avg=4476.95, stdev=2074.70 00:18:41.389 clat (usec): min=241, max=1204, avg=406.14, stdev=164.75 00:18:41.389 lat (usec): min=245, max=1209, avg=410.62, stdev=165.36 00:18:41.389 clat percentiles (usec): 00:18:41.389 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 314], 00:18:41.389 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:18:41.389 | 70.00th=[ 416], 80.00th=[ 515], 90.00th=[ 586], 95.00th=[ 865], 00:18:41.389 | 99.00th=[ 963], 99.50th=[ 1004], 99.90th=[ 1139], 99.95th=[ 1139], 00:18:41.389 | 99.99th=[ 1205] 00:18:41.389 write: IOPS=1130, BW=75.0MiB/s (78.7MB/s)(256MiB/3412msec); 0 zone resets 00:18:41.389 slat (nsec): min=13623, max=53438, avg=18580.07, stdev=3500.45 00:18:41.389 clat (usec): min=251, max=2928, avg=445.77, stdev=193.61 00:18:41.389 lat (usec): min=270, max=2944, avg=464.35, stdev=194.53 00:18:41.389 clat percentiles (usec): 00:18:41.389 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 343], 00:18:41.389 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:18:41.389 | 70.00th=[ 412], 80.00th=[ 562], 90.00th=[ 693], 95.00th=[ 938], 00:18:41.389 | 99.00th=[ 1045], 99.50th=[ 1074], 99.90th=[ 1319], 99.95th=[ 1598], 00:18:41.389 | 99.99th=[ 2933] 00:18:41.389 bw ( KiB/s): min=53040, max=91800, per=97.75%, avg=75117.33, stdev=17216.69, samples=6 00:18:41.389 iops : min= 780, max= 1350, avg=1104.67, stdev=253.19, samples=6 00:18:41.389 lat (usec) : 250=0.04%, 500=76.79%, 750=15.07%, 
1000=6.76% 00:18:41.389 lat (msec) : 2=1.33%, 4=0.01% 00:18:41.389 cpu : usr=99.21%, sys=0.12%, ctx=7, majf=0, minf=1169 00:18:41.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:41.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:41.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:41.389 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:41.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:41.389 00:18:41.389 Run status group 0 (all jobs): 00:18:41.389 READ: bw=74.5MiB/s (78.2MB/s), 74.5MiB/s-74.5MiB/s (78.2MB/s-78.2MB/s), io=255MiB (267MB), run=3415-3415msec 00:18:41.389 WRITE: bw=75.0MiB/s (78.7MB/s), 75.0MiB/s-75.0MiB/s (78.7MB/s-78.7MB/s), io=256MiB (269MB), run=3412-3412msec 00:18:42.771 ----------------------------------------------------- 00:18:42.771 Suppressions used: 00:18:42.771 count bytes template 00:18:42.771 1 5 /usr/src/fio/parse.c 00:18:42.771 1 8 libtcmalloc_minimal.so 00:18:42.771 1 904 libcrypto.so 00:18:42.771 ----------------------------------------------------- 00:18:42.771 00:18:42.771 14:11:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:42.771 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.771 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.032 14:11:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:43.032 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:43.032 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:43.032 fio-3.35 00:18:43.032 Starting 2 threads 00:19:09.586 00:19:09.586 first_half: (groupid=0, jobs=1): err= 0: pid=75424: Mon Dec 9 14:12:09 2024 00:19:09.586 read: IOPS=2755, BW=10.8MiB/s (11.3MB/s)(255MiB/23658msec) 00:19:09.586 slat (nsec): min=3090, max=30390, avg=3892.46, stdev=816.76 00:19:09.586 clat (usec): min=524, max=467219, avg=32534.81, stdev=18040.60 00:19:09.586 lat (usec): min=528, max=467224, avg=32538.71, stdev=18040.67 00:19:09.586 clat percentiles (msec): 00:19:09.586 | 1.00th=[ 3], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:19:09.586 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:19:09.586 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 40], 00:19:09.586 | 99.00th=[ 112], 99.50th=[ 150], 99.90th=[ 338], 99.95th=[ 405], 00:19:09.586 | 99.99th=[ 456] 00:19:09.586 write: IOPS=3980, BW=15.5MiB/s (16.3MB/s)(256MiB/16464msec); 0 zone resets 00:19:09.586 slat (usec): min=3, max=1599, avg= 5.88, stdev=14.88 00:19:09.586 clat (usec): min=371, max=78966, avg=13802.43, stdev=21119.28 00:19:09.586 lat (usec): min=380, max=78971, avg=13808.30, stdev=21119.41 00:19:09.586 clat percentiles (usec): 00:19:09.586 | 1.00th=[ 611], 5.00th=[ 693], 10.00th=[ 758], 20.00th=[ 889], 00:19:09.586 | 30.00th=[ 1020], 40.00th=[ 1188], 50.00th=[ 2311], 60.00th=[ 4883], 00:19:09.586 | 70.00th=[11600], 80.00th=[20841], 90.00th=[56886], 95.00th=[62129], 00:19:09.586 | 99.00th=[68682], 99.50th=[70779], 99.90th=[73925], 99.95th=[76022], 00:19:09.586 | 99.99th=[78119] 00:19:09.586 bw ( KiB/s): min= 312, max=50224, per=71.58%, avg=22795.13, stdev=13680.77, samples=23 00:19:09.586 iops : min= 78, max=12556, avg=5698.78, stdev=3420.19, samples=23 00:19:09.586 lat (usec) : 500=0.02%, 750=4.83%, 1000=9.41% 00:19:09.586 lat (msec) : 2=10.66%, 4=4.08%, 10=6.54%, 20=5.71%, 50=50.03% 00:19:09.586 lat (msec) : 100=8.18%, 250=0.48%, 500=0.06% 00:19:09.586 cpu : usr=99.35%, sys=0.19%, ctx=46, majf=0, minf=5547 00:19:09.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:09.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.586 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.586 issued rwts: total=65190,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.586 second_half: (groupid=0, jobs=1): err= 0: pid=75425: Mon Dec 9 14:12:09 2024 00:19:09.586 read: IOPS=2759, BW=10.8MiB/s (11.3MB/s)(254MiB/23611msec) 00:19:09.586 slat (nsec): min=3088, max=24934, avg=3874.81, stdev=875.57 00:19:09.586 clat (usec): min=501, max=329727, avg=32511.43, stdev=13162.03 00:19:09.586 lat (usec): min=505, max=329732, avg=32515.31, stdev=13162.10 00:19:09.586 clat percentiles (msec): 00:19:09.586 | 1.00th=[ 3], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:19:09.586 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:19:09.586 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 37], 
95.00th=[ 41], 00:19:09.586 | 99.00th=[ 100], 99.50th=[ 129], 99.90th=[ 159], 99.95th=[ 222], 00:19:09.586 | 99.99th=[ 317] 00:19:09.586 write: IOPS=4350, BW=17.0MiB/s (17.8MB/s)(256MiB/15065msec); 0 zone resets 00:19:09.586 slat (usec): min=3, max=3806, avg= 5.69, stdev=18.71 00:19:09.586 clat (usec): min=381, max=97543, avg=13779.37, stdev=21177.87 00:19:09.586 lat (usec): min=387, max=97548, avg=13785.05, stdev=21177.96 00:19:09.586 clat percentiles (usec): 00:19:09.586 | 1.00th=[ 619], 5.00th=[ 693], 10.00th=[ 758], 20.00th=[ 898], 00:19:09.586 | 30.00th=[ 1037], 40.00th=[ 1237], 50.00th=[ 2802], 60.00th=[ 4752], 00:19:09.586 | 70.00th=[11207], 80.00th=[20055], 90.00th=[57410], 95.00th=[62653], 00:19:09.586 | 99.00th=[69731], 99.50th=[71828], 99.90th=[76022], 99.95th=[82314], 00:19:09.586 | 99.99th=[94897] 00:19:09.586 bw ( KiB/s): min= 800, max=49328, per=71.58%, avg=22795.13, stdev=15000.50, samples=23 00:19:09.586 iops : min= 200, max=12332, avg=5698.78, stdev=3750.12, samples=23 00:19:09.586 lat (usec) : 500=0.01%, 750=4.67%, 1000=9.19% 00:19:09.586 lat (msec) : 2=9.85%, 4=5.21%, 10=6.37%, 20=5.91%, 50=49.79% 00:19:09.586 lat (msec) : 100=8.54%, 250=0.44%, 500=0.02% 00:19:09.586 cpu : usr=99.23%, sys=0.15%, ctx=43, majf=0, minf=5560 00:19:09.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:09.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.586 issued rwts: total=65147,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.586 00:19:09.586 Run status group 0 (all jobs): 00:19:09.586 READ: bw=21.5MiB/s (22.6MB/s), 10.8MiB/s-10.8MiB/s (11.3MB/s-11.3MB/s), io=509MiB (534MB), run=23611-23658msec 00:19:09.586 WRITE: bw=31.1MiB/s (32.6MB/s), 15.5MiB/s-17.0MiB/s (16.3MB/s-17.8MB/s), io=512MiB (537MB), run=15065-16464msec 00:19:09.586 ----------------------------------------------------- 00:19:09.586 Suppressions used: 00:19:09.586 count bytes template 00:19:09.586 2 10 /usr/src/fio/parse.c 00:19:09.586 1 96 /usr/src/fio/iolog.c 00:19:09.586 1 8 libtcmalloc_minimal.so 00:19:09.586 1 904 libcrypto.so 00:19:09.586 ----------------------------------------------------- 00:19:09.586 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:09.847 14:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:10.108 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:10.108 fio-3.35 00:19:10.108 Starting 1 thread 00:19:25.026 00:19:25.026 test: (groupid=0, jobs=1): err= 0: pid=75735: Mon Dec 9 14:12:25 2024 00:19:25.026 read: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(255MiB/8007msec) 00:19:25.026 slat (nsec): min=3101, max=22044, avg=3543.91, stdev=677.32 00:19:25.026 clat (usec): min=488, max=34990, avg=15710.95, stdev=1589.76 00:19:25.026 lat (usec): min=492, max=34995, avg=15714.49, stdev=1589.77 00:19:25.026 clat percentiles (usec): 00:19:25.026 | 1.00th=[14484], 5.00th=[14615], 10.00th=[14746], 20.00th=[14877], 00:19:25.026 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:19:25.026 | 70.00th=[15664], 80.00th=[15795], 90.00th=[17171], 95.00th=[19006], 00:19:25.026 | 99.00th=[22676], 99.50th=[23462], 99.90th=[24773], 99.95th=[26870], 00:19:25.026 | 99.99th=[33424] 00:19:25.026 write: IOPS=14.2k, BW=55.3MiB/s (58.0MB/s)(256MiB/4629msec); 0 zone resets 00:19:25.026 slat (usec): min=4, max=1363, avg= 5.58, stdev= 6.92 00:19:25.026 clat (usec): min=433, max=46086, avg=8998.75, stdev=9991.88 00:19:25.026 lat (usec): min=437, max=46091, avg=9004.33, stdev=9992.02 00:19:25.026 clat percentiles (usec): 00:19:25.026 | 1.00th=[ 619], 5.00th=[ 693], 10.00th=[ 766], 20.00th=[ 922], 00:19:25.026 | 30.00th=[ 1057], 40.00th=[ 1450], 50.00th=[ 4686], 60.00th=[ 6849], 00:19:25.026 | 70.00th=[12387], 80.00th=[15926], 90.00th=[28181], 95.00th=[29754], 00:19:25.026 | 99.00th=[32375], 99.50th=[33817], 99.90th=[37487], 99.95th=[39060], 00:19:25.026 | 99.99th=[44827] 00:19:25.026 bw ( KiB/s): min=13808, max=93880, per=92.58%, avg=52428.00, stdev=21337.93, samples=10 00:19:25.026 iops : min= 3452, max=23470, avg=13107.20, stdev=5334.56, samples=10 00:19:25.026 lat (usec) : 500=0.01%, 750=4.53%, 1000=8.40% 00:19:25.026 lat (msec) : 2=7.71%, 4=1.73%, 10=9.80%, 20=57.81%, 50=10.02% 00:19:25.026 cpu : usr=99.20%, sys=0.12%, ctx=20, majf=0, minf=5565 00:19:25.026 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:25.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.026 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.026 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.026 00:19:25.026 Run status group 0 (all jobs): 00:19:25.026 READ: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=255MiB (267MB), run=8007-8007msec 00:19:25.026 WRITE: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=256MiB (268MB), run=4629-4629msec 00:19:25.600 ----------------------------------------------------- 00:19:25.600 Suppressions used: 00:19:25.600 count bytes template 00:19:25.600 1 5 /usr/src/fio/parse.c 00:19:25.600 2 192 /usr/src/fio/iolog.c 00:19:25.600 1 8 libtcmalloc_minimal.so 00:19:25.600 1 904 libcrypto.so 00:19:25.600 ----------------------------------------------------- 00:19:25.600 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:25.600 Remove shared memory files 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57118 /dev/shm/spdk_tgt_trace.pid74057 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:25.600 00:19:25.600 real 1m6.871s 00:19:25.600 user 2m8.010s 00:19:25.600 sys 0m24.459s 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.600 ************************************ 00:19:25.600 END TEST ftl_fio_basic 00:19:25.600 14:12:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:25.600 ************************************ 00:19:25.600 14:12:27 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:25.600 14:12:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:25.600 14:12:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.600 14:12:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:25.600 ************************************ 00:19:25.600 START TEST ftl_bdevperf 00:19:25.600 ************************************ 00:19:25.600 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:25.600 * Looking for test storage... 
00:19:25.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.600 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:25.600 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:25.600 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:25.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.863 --rc genhtml_branch_coverage=1 00:19:25.863 --rc genhtml_function_coverage=1 00:19:25.863 --rc genhtml_legend=1 00:19:25.863 --rc geninfo_all_blocks=1 00:19:25.863 --rc geninfo_unexecuted_blocks=1 00:19:25.863 00:19:25.863 ' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:25.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.863 --rc genhtml_branch_coverage=1 00:19:25.863 
--rc genhtml_function_coverage=1 00:19:25.863 --rc genhtml_legend=1 00:19:25.863 --rc geninfo_all_blocks=1 00:19:25.863 --rc geninfo_unexecuted_blocks=1 00:19:25.863 00:19:25.863 ' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:25.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.863 --rc genhtml_branch_coverage=1 00:19:25.863 --rc genhtml_function_coverage=1 00:19:25.863 --rc genhtml_legend=1 00:19:25.863 --rc geninfo_all_blocks=1 00:19:25.863 --rc geninfo_unexecuted_blocks=1 00:19:25.863 00:19:25.863 ' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:25.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.863 --rc genhtml_branch_coverage=1 00:19:25.863 --rc genhtml_function_coverage=1 00:19:25.863 --rc genhtml_legend=1 00:19:25.863 --rc geninfo_all_blocks=1 00:19:25.863 --rc geninfo_unexecuted_blocks=1 00:19:25.863 00:19:25.863 ' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:25.863 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75964 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75964 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75964 ']' 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.864 14:12:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:25.864 [2024-12-09 14:12:27.558394] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
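The harness above starts bdevperf with -z so it comes up idle and waits to be driven over the RPC socket, then blocks in waitforlisten until /var/tmp/spdk.sock is answering. A condensed sketch of that launch pattern, using the paths from this run; the poll loop is a simplified stand-in for the harness's waitforlisten helper:

    # Paths as used in this run; the until-loop approximates waitforlisten.
    bp=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bp" -z -T ftl0 &
    bdevperf_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # keep polling until the app is listening
    done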
00:19:25.864 [2024-12-09 14:12:27.558557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75964 ] 00:19:26.125 [2024-12-09 14:12:27.716566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.125 [2024-12-09 14:12:27.850098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.699 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.699 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:26.699 14:12:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:26.699 14:12:28 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:26.699 14:12:28 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:26.699 14:12:28 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:26.699 14:12:28 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:26.699 14:12:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:26.961 14:12:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:26.961 14:12:28 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:26.961 14:12:28 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:26.961 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:26.961 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:26.961 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:26.961 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:26.961 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:27.224 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:27.224 { 00:19:27.224 "name": "nvme0n1", 00:19:27.224 "aliases": [ 00:19:27.224 "c0792930-e756-4770-8c09-bffb96742ae5" 00:19:27.224 ], 00:19:27.224 "product_name": "NVMe disk", 00:19:27.224 "block_size": 4096, 00:19:27.224 "num_blocks": 1310720, 00:19:27.224 "uuid": "c0792930-e756-4770-8c09-bffb96742ae5", 00:19:27.224 "numa_id": -1, 00:19:27.224 "assigned_rate_limits": { 00:19:27.224 "rw_ios_per_sec": 0, 00:19:27.224 "rw_mbytes_per_sec": 0, 00:19:27.224 "r_mbytes_per_sec": 0, 00:19:27.224 "w_mbytes_per_sec": 0 00:19:27.224 }, 00:19:27.224 "claimed": true, 00:19:27.224 "claim_type": "read_many_write_one", 00:19:27.224 "zoned": false, 00:19:27.224 "supported_io_types": { 00:19:27.224 "read": true, 00:19:27.224 "write": true, 00:19:27.224 "unmap": true, 00:19:27.224 "flush": true, 00:19:27.224 "reset": true, 00:19:27.224 "nvme_admin": true, 00:19:27.224 "nvme_io": true, 00:19:27.224 "nvme_io_md": false, 00:19:27.224 "write_zeroes": true, 00:19:27.224 "zcopy": false, 00:19:27.224 "get_zone_info": false, 00:19:27.224 "zone_management": false, 00:19:27.224 "zone_append": false, 00:19:27.224 "compare": true, 00:19:27.224 "compare_and_write": false, 00:19:27.224 "abort": true, 00:19:27.224 "seek_hole": false, 00:19:27.224 "seek_data": false, 00:19:27.224 "copy": true, 00:19:27.224 "nvme_iov_md": false 00:19:27.224 }, 00:19:27.224 "driver_specific": { 00:19:27.224 
"nvme": [ 00:19:27.224 { 00:19:27.224 "pci_address": "0000:00:11.0", 00:19:27.224 "trid": { 00:19:27.224 "trtype": "PCIe", 00:19:27.224 "traddr": "0000:00:11.0" 00:19:27.224 }, 00:19:27.224 "ctrlr_data": { 00:19:27.224 "cntlid": 0, 00:19:27.224 "vendor_id": "0x1b36", 00:19:27.224 "model_number": "QEMU NVMe Ctrl", 00:19:27.224 "serial_number": "12341", 00:19:27.224 "firmware_revision": "8.0.0", 00:19:27.224 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:27.224 "oacs": { 00:19:27.224 "security": 0, 00:19:27.224 "format": 1, 00:19:27.224 "firmware": 0, 00:19:27.224 "ns_manage": 1 00:19:27.224 }, 00:19:27.224 "multi_ctrlr": false, 00:19:27.224 "ana_reporting": false 00:19:27.224 }, 00:19:27.224 "vs": { 00:19:27.224 "nvme_version": "1.4" 00:19:27.224 }, 00:19:27.224 "ns_data": { 00:19:27.224 "id": 1, 00:19:27.224 "can_share": false 00:19:27.224 } 00:19:27.224 } 00:19:27.224 ], 00:19:27.224 "mp_policy": "active_passive" 00:19:27.224 } 00:19:27.224 } 00:19:27.224 ]' 00:19:27.224 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:27.224 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:27.224 14:12:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:27.224 14:12:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:27.224 14:12:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:27.224 14:12:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:27.224 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:27.224 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:27.224 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:27.487 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:27.487 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:27.487 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5011c2ae-f153-4a60-8df8-bedfaa95a747 00:19:27.487 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:27.487 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5011c2ae-f153-4a60-8df8-bedfaa95a747 00:19:27.748 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:28.009 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=513fc7ba-9068-4f8c-af2e-862e87a1ba2d 00:19:28.009 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 513fc7ba-9068-4f8c-af2e-862e87a1ba2d 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:28.271 14:12:29 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:28.271 14:12:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:28.533 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:28.533 { 00:19:28.533 "name": "925e0e6f-26ff-4b37-862e-e561eeec11c9", 00:19:28.533 "aliases": [ 00:19:28.533 "lvs/nvme0n1p0" 00:19:28.533 ], 00:19:28.533 "product_name": "Logical Volume", 00:19:28.533 "block_size": 4096, 00:19:28.533 "num_blocks": 26476544, 00:19:28.533 "uuid": "925e0e6f-26ff-4b37-862e-e561eeec11c9", 00:19:28.533 "assigned_rate_limits": { 00:19:28.533 "rw_ios_per_sec": 0, 00:19:28.533 "rw_mbytes_per_sec": 0, 00:19:28.533 "r_mbytes_per_sec": 0, 00:19:28.533 "w_mbytes_per_sec": 0 00:19:28.533 }, 00:19:28.533 "claimed": false, 00:19:28.533 "zoned": false, 00:19:28.533 "supported_io_types": { 00:19:28.533 "read": true, 00:19:28.533 "write": true, 00:19:28.533 "unmap": true, 00:19:28.533 "flush": false, 00:19:28.533 "reset": true, 00:19:28.533 "nvme_admin": false, 00:19:28.534 "nvme_io": false, 00:19:28.534 "nvme_io_md": false, 00:19:28.534 "write_zeroes": true, 00:19:28.534 "zcopy": false, 00:19:28.534 "get_zone_info": false, 00:19:28.534 "zone_management": false, 00:19:28.534 "zone_append": false, 00:19:28.534 "compare": false, 00:19:28.534 "compare_and_write": false, 00:19:28.534 "abort": false, 00:19:28.534 "seek_hole": true, 00:19:28.534 "seek_data": true, 00:19:28.534 "copy": false, 00:19:28.534 "nvme_iov_md": false 00:19:28.534 }, 00:19:28.534 "driver_specific": { 00:19:28.534 "lvol": { 00:19:28.534 "lvol_store_uuid": "513fc7ba-9068-4f8c-af2e-862e87a1ba2d", 00:19:28.534 "base_bdev": "nvme0n1", 00:19:28.534 "thin_provision": true, 00:19:28.534 "num_allocated_clusters": 0, 00:19:28.534 "snapshot": false, 00:19:28.534 "clone": false, 00:19:28.534 "esnap_clone": false 00:19:28.534 } 00:19:28.534 } 00:19:28.534 } 00:19:28.534 ]' 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:28.534 14:12:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:28.796 14:12:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:28.796 14:12:30 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:28.796 14:12:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:28.796 14:12:30 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:28.796 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:28.796 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:28.796 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:28.796 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:29.058 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:29.058 { 00:19:29.058 "name": "925e0e6f-26ff-4b37-862e-e561eeec11c9", 00:19:29.058 "aliases": [ 00:19:29.058 "lvs/nvme0n1p0" 00:19:29.058 ], 00:19:29.058 "product_name": "Logical Volume", 00:19:29.058 "block_size": 4096, 00:19:29.058 "num_blocks": 26476544, 00:19:29.058 "uuid": "925e0e6f-26ff-4b37-862e-e561eeec11c9", 00:19:29.058 "assigned_rate_limits": { 00:19:29.058 "rw_ios_per_sec": 0, 00:19:29.058 "rw_mbytes_per_sec": 0, 00:19:29.058 "r_mbytes_per_sec": 0, 00:19:29.058 "w_mbytes_per_sec": 0 00:19:29.058 }, 00:19:29.058 "claimed": false, 00:19:29.058 "zoned": false, 00:19:29.058 "supported_io_types": { 00:19:29.058 "read": true, 00:19:29.058 "write": true, 00:19:29.058 "unmap": true, 00:19:29.058 "flush": false, 00:19:29.058 "reset": true, 00:19:29.058 "nvme_admin": false, 00:19:29.058 "nvme_io": false, 00:19:29.058 "nvme_io_md": false, 00:19:29.058 "write_zeroes": true, 00:19:29.058 "zcopy": false, 00:19:29.058 "get_zone_info": false, 00:19:29.058 "zone_management": false, 00:19:29.058 "zone_append": false, 00:19:29.058 "compare": false, 00:19:29.058 "compare_and_write": false, 00:19:29.058 "abort": false, 00:19:29.058 "seek_hole": true, 00:19:29.058 "seek_data": true, 00:19:29.058 "copy": false, 00:19:29.058 "nvme_iov_md": false 00:19:29.058 }, 00:19:29.058 "driver_specific": { 00:19:29.058 "lvol": { 00:19:29.058 "lvol_store_uuid": "513fc7ba-9068-4f8c-af2e-862e87a1ba2d", 00:19:29.058 "base_bdev": "nvme0n1", 00:19:29.058 "thin_provision": true, 00:19:29.058 "num_allocated_clusters": 0, 00:19:29.058 "snapshot": false, 00:19:29.058 "clone": false, 00:19:29.058 "esnap_clone": false 00:19:29.058 } 00:19:29.058 } 00:19:29.058 } 00:19:29.058 ]' 00:19:29.058 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:29.058 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:29.058 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:29.319 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:29.319 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:29.319 14:12:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:29.319 14:12:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:29.320 14:12:30 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:29.320 14:12:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:29.320 14:12:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:29.320 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:29.320 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:29.320 14:12:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:29.320 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:29.320 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 925e0e6f-26ff-4b37-862e-e561eeec11c9 00:19:29.580 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:29.580 { 00:19:29.580 "name": "925e0e6f-26ff-4b37-862e-e561eeec11c9", 00:19:29.580 "aliases": [ 00:19:29.580 "lvs/nvme0n1p0" 00:19:29.580 ], 00:19:29.580 "product_name": "Logical Volume", 00:19:29.580 "block_size": 4096, 00:19:29.580 "num_blocks": 26476544, 00:19:29.580 "uuid": "925e0e6f-26ff-4b37-862e-e561eeec11c9", 00:19:29.580 "assigned_rate_limits": { 00:19:29.580 "rw_ios_per_sec": 0, 00:19:29.580 "rw_mbytes_per_sec": 0, 00:19:29.580 "r_mbytes_per_sec": 0, 00:19:29.580 "w_mbytes_per_sec": 0 00:19:29.580 }, 00:19:29.581 "claimed": false, 00:19:29.581 "zoned": false, 00:19:29.581 "supported_io_types": { 00:19:29.581 "read": true, 00:19:29.581 "write": true, 00:19:29.581 "unmap": true, 00:19:29.581 "flush": false, 00:19:29.581 "reset": true, 00:19:29.581 "nvme_admin": false, 00:19:29.581 "nvme_io": false, 00:19:29.581 "nvme_io_md": false, 00:19:29.581 "write_zeroes": true, 00:19:29.581 "zcopy": false, 00:19:29.581 "get_zone_info": false, 00:19:29.581 "zone_management": false, 00:19:29.581 "zone_append": false, 00:19:29.581 "compare": false, 00:19:29.581 "compare_and_write": false, 00:19:29.581 "abort": false, 00:19:29.581 "seek_hole": true, 00:19:29.581 "seek_data": true, 00:19:29.581 "copy": false, 00:19:29.581 "nvme_iov_md": false 00:19:29.581 }, 00:19:29.581 "driver_specific": { 00:19:29.581 "lvol": { 00:19:29.581 "lvol_store_uuid": "513fc7ba-9068-4f8c-af2e-862e87a1ba2d", 00:19:29.581 "base_bdev": "nvme0n1", 00:19:29.581 "thin_provision": true, 00:19:29.581 "num_allocated_clusters": 0, 00:19:29.581 "snapshot": false, 00:19:29.581 "clone": false, 00:19:29.581 "esnap_clone": false 00:19:29.581 } 00:19:29.581 } 00:19:29.581 } 00:19:29.581 ]' 00:19:29.581 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:29.581 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:29.581 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:29.581 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:29.581 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:29.581 14:12:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:29.581 14:12:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:29.581 14:12:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 925e0e6f-26ff-4b37-862e-e561eeec11c9 -c nvc0n1p0 --l2p_dram_limit 20 00:19:29.843 [2024-12-09 14:12:31.555217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.555296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:29.843 [2024-12-09 14:12:31.555313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:29.843 [2024-12-09 14:12:31.555325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.555401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.555414] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.843 [2024-12-09 14:12:31.555424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:29.843 [2024-12-09 14:12:31.555434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.555454] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:29.843 [2024-12-09 14:12:31.556327] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:29.843 [2024-12-09 14:12:31.556362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.556373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.843 [2024-12-09 14:12:31.556382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:19:29.843 [2024-12-09 14:12:31.556393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.556428] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ae69bc2d-4459-401b-89ad-d96e3274abd8 00:19:29.843 [2024-12-09 14:12:31.558286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.558336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:29.843 [2024-12-09 14:12:31.558355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:29.843 [2024-12-09 14:12:31.558364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.567608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.567659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.843 [2024-12-09 14:12:31.567674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.158 ms 00:19:29.843 [2024-12-09 14:12:31.567685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.567793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.567802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.843 [2024-12-09 14:12:31.567817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:29.843 [2024-12-09 14:12:31.567826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.567891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.567903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:29.843 [2024-12-09 14:12:31.567913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:29.843 [2024-12-09 14:12:31.567922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.567948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.843 [2024-12-09 14:12:31.572406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.572458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.843 [2024-12-09 14:12:31.572468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:19:29.843 [2024-12-09 14:12:31.572483] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.572531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.572554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:29.843 [2024-12-09 14:12:31.572563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:29.843 [2024-12-09 14:12:31.572573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.572609] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:29.843 [2024-12-09 14:12:31.572769] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:29.843 [2024-12-09 14:12:31.572782] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:29.843 [2024-12-09 14:12:31.572795] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:29.843 [2024-12-09 14:12:31.572805] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:29.843 [2024-12-09 14:12:31.572817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:29.843 [2024-12-09 14:12:31.572826] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:29.843 [2024-12-09 14:12:31.572835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:29.843 [2024-12-09 14:12:31.572842] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:29.843 [2024-12-09 14:12:31.572853] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:29.843 [2024-12-09 14:12:31.572863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.572873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:29.843 [2024-12-09 14:12:31.572883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:29.843 [2024-12-09 14:12:31.572892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.572975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.843 [2024-12-09 14:12:31.572986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:29.843 [2024-12-09 14:12:31.572994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:29.843 [2024-12-09 14:12:31.573006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.843 [2024-12-09 14:12:31.573096] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:29.843 [2024-12-09 14:12:31.573126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:29.843 [2024-12-09 14:12:31.573134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.843 [2024-12-09 14:12:31.573145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.843 [2024-12-09 14:12:31.573153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:29.843 [2024-12-09 14:12:31.573162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:29.843 [2024-12-09 14:12:31.573168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:29.843 
[2024-12-09 14:12:31.573177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:29.843 [2024-12-09 14:12:31.573184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:29.843 [2024-12-09 14:12:31.573194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.843 [2024-12-09 14:12:31.573200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:29.843 [2024-12-09 14:12:31.573219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:29.843 [2024-12-09 14:12:31.573226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.843 [2024-12-09 14:12:31.573235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:29.843 [2024-12-09 14:12:31.573260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:29.843 [2024-12-09 14:12:31.573271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:29.844 [2024-12-09 14:12:31.573287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:29.844 [2024-12-09 14:12:31.573293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:29.844 [2024-12-09 14:12:31.573314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.844 [2024-12-09 14:12:31.573330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:29.844 [2024-12-09 14:12:31.573339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.844 [2024-12-09 14:12:31.573356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:29.844 [2024-12-09 14:12:31.573366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.844 [2024-12-09 14:12:31.573382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:29.844 [2024-12-09 14:12:31.573391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.844 [2024-12-09 14:12:31.573410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:29.844 [2024-12-09 14:12:31.573417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.844 [2024-12-09 14:12:31.573431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:29.844 [2024-12-09 14:12:31.573442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:29.844 [2024-12-09 14:12:31.573449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.844 [2024-12-09 14:12:31.573459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:29.844 [2024-12-09 14:12:31.573465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:29.844 [2024-12-09 14:12:31.573474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:29.844 [2024-12-09 14:12:31.573489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:29.844 [2024-12-09 14:12:31.573496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573503] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:29.844 [2024-12-09 14:12:31.573511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:29.844 [2024-12-09 14:12:31.573520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.844 [2024-12-09 14:12:31.573528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.844 [2024-12-09 14:12:31.573553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:29.844 [2024-12-09 14:12:31.573561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:29.844 [2024-12-09 14:12:31.573570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:29.844 [2024-12-09 14:12:31.573577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:29.844 [2024-12-09 14:12:31.573586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:29.844 [2024-12-09 14:12:31.573594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:29.844 [2024-12-09 14:12:31.573605] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:29.844 [2024-12-09 14:12:31.573615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.844 [2024-12-09 14:12:31.573626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:29.844 [2024-12-09 14:12:31.573633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:29.844 [2024-12-09 14:12:31.573643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:29.844 [2024-12-09 14:12:31.573650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:29.844 [2024-12-09 14:12:31.573660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:29.844 [2024-12-09 14:12:31.573667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:29.844 [2024-12-09 14:12:31.573677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:29.844 [2024-12-09 14:12:31.573684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:29.844 [2024-12-09 14:12:31.573697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:29.844 [2024-12-09 14:12:31.573703] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:29.844 [2024-12-09 14:12:31.573712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:29.844 [2024-12-09 14:12:31.573720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:29.844 [2024-12-09 14:12:31.573730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:29.844 [2024-12-09 14:12:31.573738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:29.844 [2024-12-09 14:12:31.573754] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:29.844 [2024-12-09 14:12:31.573762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.844 [2024-12-09 14:12:31.573775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:29.844 [2024-12-09 14:12:31.573783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:29.844 [2024-12-09 14:12:31.573793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:29.844 [2024-12-09 14:12:31.573800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:29.844 [2024-12-09 14:12:31.573809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.844 [2024-12-09 14:12:31.573817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:29.844 [2024-12-09 14:12:31.573827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:19:29.844 [2024-12-09 14:12:31.573834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.844 [2024-12-09 14:12:31.573871] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
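The whole startup trace above, and the NV cache scrub that follows it, is the work of the single bdev_ftl_create RPC recorded earlier in this log. As a point of reference, a minimal sketch of that call, reusing the exact bdev names and lvol UUID from this run and assuming a running SPDK target on the default RPC socket:

  # Create FTL bdev 'ftl0' over the thin-provisioned lvol, with nvc0n1p0 as the
  # NV cache device and a 20 MiB DRAM budget for the L2P table. -t 240 sets a
  # generous RPC timeout for the long first-time startup (the scrub below takes
  # about 4.5 seconds on this machine).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 925e0e6f-26ff-4b37-862e-e561eeec11c9 -c nvc0n1p0 --l2p_dram_limit 20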
00:19:29.844 [2024-12-09 14:12:31.573880] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:35.138 [2024-12-09 14:12:36.086764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.086860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:35.138 [2024-12-09 14:12:36.086881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4512.868 ms 00:19:35.138 [2024-12-09 14:12:36.086891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.119126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.119193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.138 [2024-12-09 14:12:36.119210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.980 ms 00:19:35.138 [2024-12-09 14:12:36.119220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.119371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.119383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.138 [2024-12-09 14:12:36.119398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:35.138 [2024-12-09 14:12:36.119406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.167959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.168027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.138 [2024-12-09 14:12:36.168044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.493 ms 00:19:35.138 [2024-12-09 14:12:36.168052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.168103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.168113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.138 [2024-12-09 14:12:36.168125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:35.138 [2024-12-09 14:12:36.168136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.168796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.168835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.138 [2024-12-09 14:12:36.168849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:19:35.138 [2024-12-09 14:12:36.168857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.168985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.168995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.138 [2024-12-09 14:12:36.169009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:35.138 [2024-12-09 14:12:36.169017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.184868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.184915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.138 [2024-12-09 
14:12:36.184929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.827 ms 00:19:35.138 [2024-12-09 14:12:36.184947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.198288] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:35.138 [2024-12-09 14:12:36.205608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.205656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.138 [2024-12-09 14:12:36.205668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.573 ms 00:19:35.138 [2024-12-09 14:12:36.205678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.301842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.301922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:35.138 [2024-12-09 14:12:36.301939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.133 ms 00:19:35.138 [2024-12-09 14:12:36.301951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.302164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.302182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.138 [2024-12-09 14:12:36.302192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:19:35.138 [2024-12-09 14:12:36.302206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.329039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.329101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:35.138 [2024-12-09 14:12:36.329116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.778 ms 00:19:35.138 [2024-12-09 14:12:36.329127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.354513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.354577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:35.138 [2024-12-09 14:12:36.354591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.334 ms 00:19:35.138 [2024-12-09 14:12:36.354601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.355220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.355267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.138 [2024-12-09 14:12:36.355277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:19:35.138 [2024-12-09 14:12:36.355287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.438289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.438355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:35.138 [2024-12-09 14:12:36.438370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.945 ms 00:19:35.138 [2024-12-09 14:12:36.438381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 
14:12:36.466259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.466320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:35.138 [2024-12-09 14:12:36.466338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.786 ms 00:19:35.138 [2024-12-09 14:12:36.466348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.492058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.492117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:35.138 [2024-12-09 14:12:36.492131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.659 ms 00:19:35.138 [2024-12-09 14:12:36.492141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.518315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.518380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.138 [2024-12-09 14:12:36.518394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.125 ms 00:19:35.138 [2024-12-09 14:12:36.518405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.518460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.518475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.138 [2024-12-09 14:12:36.518485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:35.138 [2024-12-09 14:12:36.518496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.518602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.138 [2024-12-09 14:12:36.518617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.138 [2024-12-09 14:12:36.518626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:35.138 [2024-12-09 14:12:36.518636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.138 [2024-12-09 14:12:36.519922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4964.068 ms, result 0 00:19:35.138 { 00:19:35.138 "name": "ftl0", 00:19:35.138 "uuid": "ae69bc2d-4459-401b-89ad-d96e3274abd8" 00:19:35.138 } 00:19:35.138 14:12:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:35.138 14:12:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:35.138 14:12:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:35.138 14:12:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:35.139 [2024-12-09 14:12:36.856021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:35.139 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:35.139 Zero copy mechanism will not be used. 00:19:35.139 Running I/O for 4 seconds... 
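The bdevperf phase that starts here drives three workloads against ftl0 in sequence, all through the bdevperf.py helper shown above (-q sets the queue depth, -w the workload type, -t the run time in seconds, -o the IO size in bytes). For reference, the three invocations issued in this run:

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096

Note that the 69632-byte IO size of the first job exceeds the 65536-byte zero-copy threshold, which is why the log reports that the zero copy mechanism will not be used for it.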
00:19:37.448 3078.00 IOPS, 204.40 MiB/s [2024-12-09T14:12:40.175Z] 3154.50 IOPS, 209.48 MiB/s [2024-12-09T14:12:41.108Z] 3159.67 IOPS, 209.82 MiB/s
00:19:39.314 Latency(us)
00:19:39.314 [2024-12-09T14:12:41.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:39.314 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:39.314 ftl0 : 4.00 3164.61 210.15 0.00 0.00 331.53 170.14 2117.32
00:19:39.314 [2024-12-09T14:12:41.108Z] ===================================================================================================================
00:19:39.314 [2024-12-09T14:12:41.108Z] Total : 3164.61 210.15 0.00 0.00 331.53 170.14 2117.32
00:19:39.314 [2024-12-09 14:12:40.865521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:19:39.314 "results": [
00:19:39.314 {
00:19:39.314 "job": "ftl0",
00:19:39.314 "core_mask": "0x1",
00:19:39.314 "workload": "randwrite",
00:19:39.314 "status": "finished",
00:19:39.314 "queue_depth": 1,
00:19:39.314 "io_size": 69632,
00:19:39.314 "runtime": 4.000179,
00:19:39.314 "iops": 3164.608383774826,
00:19:39.314 "mibps": 210.14977548504703,
00:19:39.314 "io_failed": 0,
00:19:39.314 "io_timeout": 0,
00:19:39.314 "avg_latency_us": 331.53175253847974,
00:19:39.314 "min_latency_us": 170.14153846153846,
00:19:39.314 "max_latency_us": 2117.316923076923
00:19:39.314 }
00:19:39.314 ],
00:19:39.314 "core_count": 1
00:19:39.314 }
00:19:39.314 14:12:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-09 14:12:40.972605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
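Each perform_tests call also emits a machine-readable results block like the JSON above. A hypothetical post-processing one-liner (assuming that block has been captured to a file named results.json, which this test does not actually do) could pull out the headline numbers with the same jq tool the test already uses:

  # Hypothetical helper, not part of the test: summarize a captured results block
  jq '.results[] | {job, iops, mibps, avg_latency_us}' results.json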
00:19:41.639 11679.00 IOPS, 45.62 MiB/s [2024-12-09T14:12:44.016Z] 11383.00 IOPS, 44.46 MiB/s [2024-12-09T14:12:45.388Z] 11183.33 IOPS, 43.68 MiB/s [2024-12-09T14:12:45.388Z] 11109.25 IOPS, 43.40 MiB/s
00:19:43.594 Latency(us)
00:19:43.594 [2024-12-09T14:12:45.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:43.594 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:43.594 ftl0 : 4.02 11085.32 43.30 0.00 0.00 11519.15 252.06 31053.98
00:19:43.594 [2024-12-09T14:12:45.388Z] ===================================================================================================================
00:19:43.594 [2024-12-09T14:12:45.388Z] Total : 11085.32 43.30 0.00 0.00 11519.15 0.00 31053.98
00:19:43.594 [2024-12-09 14:12:45.001158] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:19:43.594 "results": [
00:19:43.594 {
00:19:43.594 "job": "ftl0",
00:19:43.594 "core_mask": "0x1",
00:19:43.594 "workload": "randwrite",
00:19:43.594 "status": "finished",
00:19:43.594 "queue_depth": 128,
00:19:43.594 "io_size": 4096,
00:19:43.594 "runtime": 4.020182,
00:19:43.594 "iops": 11085.31902287011,
00:19:43.594 "mibps": 43.302027433086366,
00:19:43.594 "io_failed": 0,
00:19:43.594 "io_timeout": 0,
00:19:43.594 "avg_latency_us": 11519.14975539618,
00:19:43.594 "min_latency_us": 252.06153846153848,
00:19:43.594 "max_latency_us": 31053.98153846154
00:19:43.594 }
00:19:43.594 ],
00:19:43.594 "core_count": 1
00:19:43.594 }
00:19:43.594 14:12:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-09 14:12:45.102508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
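The verify job launched above differs from the randwrite jobs in that bdevperf writes a data pattern and reads it back for comparison over a fixed LBA range. Per the results that follow, that range starts at 0x0 with length 0x1400000, i.e. 0x1400000 = 20 * 1024 * 1024 = 20971520 bytes (20 MiB), the same figure echoed in the verify_range object of the JSON below.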
00:19:45.460 8628.00 IOPS, 33.70 MiB/s [2024-12-09T14:12:48.189Z] 8746.50 IOPS, 34.17 MiB/s [2024-12-09T14:12:49.122Z] 8802.33 IOPS, 34.38 MiB/s [2024-12-09T14:12:49.382Z] 9228.50 IOPS, 36.05 MiB/s
00:19:47.588 Latency(us)
00:19:47.588 [2024-12-09T14:12:49.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:47.588 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:47.588 Verification LBA range: start 0x0 length 0x1400000
00:19:47.588 ftl0 : 4.01 9240.27 36.09 0.00 0.00 13804.38 226.86 23391.31
00:19:47.588 [2024-12-09T14:12:49.382Z] ===================================================================================================================
00:19:47.588 [2024-12-09T14:12:49.382Z] Total : 9240.27 36.09 0.00 0.00 13804.38 0.00 23391.31
00:19:47.588 {
00:19:47.588 "results": [
00:19:47.588 {
00:19:47.588 "job": "ftl0",
00:19:47.588 "core_mask": "0x1",
00:19:47.588 "workload": "verify",
00:19:47.588 "status": "finished",
00:19:47.588 "verify_range": {
00:19:47.588 "start": 0,
00:19:47.588 "length": 20971520
00:19:47.588 },
00:19:47.588 "queue_depth": 128,
00:19:47.588 "io_size": 4096,
00:19:47.588 "runtime": 4.008648,
00:19:47.588 "iops": 9240.272530788436,
00:19:47.588 "mibps": 36.09481457339233,
00:19:47.588 "io_failed": 0,
00:19:47.588 "io_timeout": 0,
00:19:47.588 "avg_latency_us": 13804.38163216228,
00:19:47.588 "min_latency_us": 226.85538461538462,
00:19:47.588 "max_latency_us": 23391.310769230768
00:19:47.588 }
00:19:47.588 ],
00:19:47.588 "core_count": 1
00:19:47.588 }
00:19:47.588 [2024-12-09 14:12:49.133691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
14:12:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-12-09 14:12:49.335387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-09 14:12:49.335442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-12-09 14:12:49.335456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-12-09 14:12:49.335466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-09 14:12:49.335493] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-09 14:12:49.338447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-09 14:12:49.338482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-12-09 14:12:49.338495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms
[2024-12-09 14:12:49.338502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-09 14:12:49.340949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-09 14:12:49.340984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-12-09 14:12:49.341002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.419 ms
[2024-12-09 14:12:49.341009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:47.851 [2024-12-09 14:12:49.544428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-09 14:12:49.544479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:19:47.851 [2024-12-09 14:12:49.544499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 203.394 ms 00:19:47.851 [2024-12-09 14:12:49.544508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.851 [2024-12-09 14:12:49.550773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.851 [2024-12-09 14:12:49.550818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.851 [2024-12-09 14:12:49.550833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:19:47.851 [2024-12-09 14:12:49.550845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.851 [2024-12-09 14:12:49.577149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.851 [2024-12-09 14:12:49.577195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.851 [2024-12-09 14:12:49.577210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.230 ms 00:19:47.851 [2024-12-09 14:12:49.577218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.851 [2024-12-09 14:12:49.594309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.851 [2024-12-09 14:12:49.594361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.851 [2024-12-09 14:12:49.594377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.012 ms 00:19:47.851 [2024-12-09 14:12:49.594386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.851 [2024-12-09 14:12:49.594569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.851 [2024-12-09 14:12:49.594583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.851 [2024-12-09 14:12:49.594597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:47.851 [2024-12-09 14:12:49.594605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.851 [2024-12-09 14:12:49.619747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.851 [2024-12-09 14:12:49.619790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.851 [2024-12-09 14:12:49.619805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.121 ms 00:19:47.851 [2024-12-09 14:12:49.619812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.113 [2024-12-09 14:12:49.645452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.113 [2024-12-09 14:12:49.645494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:48.113 [2024-12-09 14:12:49.645509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.587 ms 00:19:48.113 [2024-12-09 14:12:49.645517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.113 [2024-12-09 14:12:49.670401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.113 [2024-12-09 14:12:49.670445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:48.113 [2024-12-09 14:12:49.670460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.814 ms 00:19:48.113 [2024-12-09 14:12:49.670467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.113 [2024-12-09 14:12:49.695280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.113 [2024-12-09 14:12:49.695323] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:48.113 [2024-12-09 14:12:49.695340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.708 ms 00:19:48.113 [2024-12-09 14:12:49.695348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.113 [2024-12-09 14:12:49.695396] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:48.113 [2024-12-09 14:12:49.695413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:48.113 [2024-12-09 14:12:49.695425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:48.113 [2024-12-09 14:12:49.695434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:48.113 [2024-12-09 14:12:49.695444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:48.114 [2024-12-09 14:12:49.695634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.695996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:48.114 [2024-12-09 14:12:49.696279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:48.115 [2024-12-09 14:12:49.696286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:48.115 [2024-12-09 14:12:49.696296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:48.115 [2024-12-09 14:12:49.696304] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:48.115 [2024-12-09 14:12:49.696315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:48.115 [2024-12-09 14:12:49.696322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:48.115 [2024-12-09 14:12:49.696332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:48.115 [2024-12-09 14:12:49.696347] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:48.115 [2024-12-09 14:12:49.696357] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae69bc2d-4459-401b-89ad-d96e3274abd8 00:19:48.115 [2024-12-09 14:12:49.696368] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:48.115 [2024-12-09 14:12:49.696378] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:48.115 [2024-12-09 14:12:49.696385] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:48.115 [2024-12-09 14:12:49.696395] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:48.115 [2024-12-09 14:12:49.696403] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:48.115 [2024-12-09 14:12:49.696413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:48.115 [2024-12-09 14:12:49.696421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:48.115 [2024-12-09 14:12:49.696431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:48.115 [2024-12-09 14:12:49.696437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:48.115 [2024-12-09 14:12:49.696447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.115 [2024-12-09 14:12:49.696455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:48.115 [2024-12-09 14:12:49.696467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:19:48.115 [2024-12-09 14:12:49.696475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.115 [2024-12-09 14:12:49.710269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.115 [2024-12-09 14:12:49.710308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:48.115 [2024-12-09 14:12:49.710322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.751 ms 00:19:48.115 [2024-12-09 14:12:49.710330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.115 [2024-12-09 14:12:49.710768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.115 [2024-12-09 14:12:49.710786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:48.115 [2024-12-09 14:12:49.710797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:19:48.115 [2024-12-09 14:12:49.710806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.115 [2024-12-09 14:12:49.749713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.115 [2024-12-09 14:12:49.749758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.115 [2024-12-09 14:12:49.749774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.115 [2024-12-09 14:12:49.749783] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:48.115 [2024-12-09 14:12:49.749844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.115 [2024-12-09 14:12:49.749853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.115 [2024-12-09 14:12:49.749864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.115 [2024-12-09 14:12:49.749872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.115 [2024-12-09 14:12:49.749978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.115 [2024-12-09 14:12:49.749988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.115 [2024-12-09 14:12:49.749999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.115 [2024-12-09 14:12:49.750007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.115 [2024-12-09 14:12:49.750025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.115 [2024-12-09 14:12:49.750034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.115 [2024-12-09 14:12:49.750044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.115 [2024-12-09 14:12:49.750051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.115 [2024-12-09 14:12:49.834995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.115 [2024-12-09 14:12:49.835049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.115 [2024-12-09 14:12:49.835068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.115 [2024-12-09 14:12:49.835075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.377 [2024-12-09 14:12:49.905006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.377 [2024-12-09 14:12:49.905060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.377 [2024-12-09 14:12:49.905075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.377 [2024-12-09 14:12:49.905084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.377 [2024-12-09 14:12:49.905210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.377 [2024-12-09 14:12:49.905221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.377 [2024-12-09 14:12:49.905252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.377 [2024-12-09 14:12:49.905261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.377 [2024-12-09 14:12:49.905329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.377 [2024-12-09 14:12:49.905340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.377 [2024-12-09 14:12:49.905351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.377 [2024-12-09 14:12:49.905359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.377 [2024-12-09 14:12:49.905460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.377 [2024-12-09 14:12:49.905473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.377 [2024-12-09 14:12:49.905486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:48.377 [2024-12-09 14:12:49.905496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.377 [2024-12-09 14:12:49.905530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.377 [2024-12-09 14:12:49.905567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:48.377 [2024-12-09 14:12:49.905578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.377 [2024-12-09 14:12:49.905587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.377 [2024-12-09 14:12:49.905629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.377 [2024-12-09 14:12:49.905642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.377 [2024-12-09 14:12:49.905652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.377 [2024-12-09 14:12:49.905668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.377 [2024-12-09 14:12:49.905718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.377 [2024-12-09 14:12:49.905729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.377 [2024-12-09 14:12:49.905739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.377 [2024-12-09 14:12:49.905748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.377 [2024-12-09 14:12:49.905896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 570.458 ms, result 0 00:19:48.377 true 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75964 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75964 ']' 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75964 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75964 00:19:48.377 killing process with pid 75964 00:19:48.377 Received shutdown signal, test time was about 4.000000 seconds 00:19:48.377 00:19:48.377 Latency(us) 00:19:48.377 [2024-12-09T14:12:50.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.377 [2024-12-09T14:12:50.171Z] =================================================================================================================== 00:19:48.377 [2024-12-09T14:12:50.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75964' 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75964 00:19:48.377 14:12:49 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75964 00:19:58.372 Remove shared memory files 00:19:58.372 14:12:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:58.372 14:12:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:58.372 14:12:59 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:58.373 14:12:59 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:58.373 14:12:59 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:58.373 14:12:59 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:58.373 14:12:59 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:58.373 14:12:59 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:58.373 00:19:58.373 real 0m32.542s 00:19:58.373 user 0m35.223s 00:19:58.373 sys 0m1.144s 00:19:58.373 ************************************ 00:19:58.373 14:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.373 14:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:58.373 END TEST ftl_bdevperf 00:19:58.373 ************************************ 00:19:58.373 14:12:59 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:58.373 14:12:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:58.373 14:12:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.373 14:12:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:58.373 ************************************ 00:19:58.373 START TEST ftl_trim 00:19:58.373 ************************************ 00:19:58.373 14:12:59 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:58.373 * Looking for test storage... 00:19:58.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:58.373 14:12:59 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:58.373 14:12:59 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:58.373 14:12:59 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.373 14:13:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:58.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.373 --rc genhtml_branch_coverage=1 00:19:58.373 --rc genhtml_function_coverage=1 00:19:58.373 --rc genhtml_legend=1 00:19:58.373 --rc geninfo_all_blocks=1 00:19:58.373 --rc geninfo_unexecuted_blocks=1 00:19:58.373 00:19:58.373 ' 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:58.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.373 --rc genhtml_branch_coverage=1 00:19:58.373 --rc genhtml_function_coverage=1 00:19:58.373 --rc genhtml_legend=1 00:19:58.373 --rc geninfo_all_blocks=1 00:19:58.373 --rc geninfo_unexecuted_blocks=1 00:19:58.373 00:19:58.373 ' 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:58.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.373 --rc genhtml_branch_coverage=1 00:19:58.373 --rc genhtml_function_coverage=1 00:19:58.373 --rc genhtml_legend=1 00:19:58.373 --rc geninfo_all_blocks=1 00:19:58.373 --rc geninfo_unexecuted_blocks=1 00:19:58.373 00:19:58.373 ' 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:58.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.373 --rc genhtml_branch_coverage=1 00:19:58.373 --rc genhtml_function_coverage=1 00:19:58.373 --rc genhtml_legend=1 00:19:58.373 --rc geninfo_all_blocks=1 00:19:58.373 --rc geninfo_unexecuted_blocks=1 00:19:58.373 00:19:58.373 ' 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
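The xtrace above is scripts/common.sh deciding which lcov option names to use: lt 1.15 2 splits both version strings on '.', '-' or ':' and compares the fields numerically, left to right. A minimal standalone sketch of that comparison (assuming purely numeric fields; the traced cmp_versions handles '>', '<' and '==' the same way):

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          # Missing fields count as 0, so "1.15" is compared against "2.0"
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == '==' ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo 'lcov older than 2: keep the legacy --rc lcov_*_coverage=1 names'

Here 1 < 2 already decides the result in the first field, which is why the trace ends with the pre-2.0 LCOV_OPTS/LCOV values being exported.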
00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:58.373 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76337 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76337 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76337 ']' 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.373 14:13:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:58.373 14:13:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:58.633 [2024-12-09 14:13:00.184964] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:19:58.634 [2024-12-09 14:13:00.185110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76337 ] 00:19:58.634 [2024-12-09 14:13:00.348118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.894 [2024-12-09 14:13:00.461762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.894 [2024-12-09 14:13:00.462018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.894 [2024-12-09 14:13:00.462115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.464 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.464 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:59.464 14:13:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:59.464 14:13:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:59.465 14:13:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:59.465 14:13:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:59.465 14:13:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:59.465 14:13:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:59.725 14:13:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:59.726 14:13:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:59.726 14:13:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:59.726 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:59.726 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:59.726 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:59.726 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:59.726 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:00.085 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:00.085 { 00:20:00.085 "name": "nvme0n1", 00:20:00.085 "aliases": [ 00:20:00.085 
"f6c8da28-3ced-4bcf-9561-461348d64f4d" 00:20:00.085 ], 00:20:00.085 "product_name": "NVMe disk", 00:20:00.085 "block_size": 4096, 00:20:00.085 "num_blocks": 1310720, 00:20:00.085 "uuid": "f6c8da28-3ced-4bcf-9561-461348d64f4d", 00:20:00.085 "numa_id": -1, 00:20:00.085 "assigned_rate_limits": { 00:20:00.085 "rw_ios_per_sec": 0, 00:20:00.085 "rw_mbytes_per_sec": 0, 00:20:00.085 "r_mbytes_per_sec": 0, 00:20:00.085 "w_mbytes_per_sec": 0 00:20:00.085 }, 00:20:00.085 "claimed": true, 00:20:00.085 "claim_type": "read_many_write_one", 00:20:00.085 "zoned": false, 00:20:00.085 "supported_io_types": { 00:20:00.085 "read": true, 00:20:00.085 "write": true, 00:20:00.085 "unmap": true, 00:20:00.085 "flush": true, 00:20:00.085 "reset": true, 00:20:00.085 "nvme_admin": true, 00:20:00.085 "nvme_io": true, 00:20:00.085 "nvme_io_md": false, 00:20:00.085 "write_zeroes": true, 00:20:00.085 "zcopy": false, 00:20:00.085 "get_zone_info": false, 00:20:00.085 "zone_management": false, 00:20:00.085 "zone_append": false, 00:20:00.085 "compare": true, 00:20:00.085 "compare_and_write": false, 00:20:00.085 "abort": true, 00:20:00.085 "seek_hole": false, 00:20:00.085 "seek_data": false, 00:20:00.085 "copy": true, 00:20:00.085 "nvme_iov_md": false 00:20:00.086 }, 00:20:00.086 "driver_specific": { 00:20:00.086 "nvme": [ 00:20:00.086 { 00:20:00.086 "pci_address": "0000:00:11.0", 00:20:00.086 "trid": { 00:20:00.086 "trtype": "PCIe", 00:20:00.086 "traddr": "0000:00:11.0" 00:20:00.086 }, 00:20:00.086 "ctrlr_data": { 00:20:00.086 "cntlid": 0, 00:20:00.086 "vendor_id": "0x1b36", 00:20:00.086 "model_number": "QEMU NVMe Ctrl", 00:20:00.086 "serial_number": "12341", 00:20:00.086 "firmware_revision": "8.0.0", 00:20:00.086 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:00.086 "oacs": { 00:20:00.086 "security": 0, 00:20:00.086 "format": 1, 00:20:00.086 "firmware": 0, 00:20:00.086 "ns_manage": 1 00:20:00.086 }, 00:20:00.086 "multi_ctrlr": false, 00:20:00.086 "ana_reporting": false 00:20:00.086 }, 00:20:00.086 "vs": { 00:20:00.086 "nvme_version": "1.4" 00:20:00.086 }, 00:20:00.086 "ns_data": { 00:20:00.086 "id": 1, 00:20:00.086 "can_share": false 00:20:00.086 } 00:20:00.086 } 00:20:00.086 ], 00:20:00.086 "mp_policy": "active_passive" 00:20:00.086 } 00:20:00.086 } 00:20:00.086 ]' 00:20:00.086 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:00.086 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:00.086 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:00.086 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:00.086 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:00.086 14:13:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:00.086 14:13:01 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:00.086 14:13:01 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:00.086 14:13:01 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:00.086 14:13:01 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:00.086 14:13:01 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:00.344 14:13:01 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=513fc7ba-9068-4f8c-af2e-862e87a1ba2d 00:20:00.344 14:13:01 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:00.344 14:13:01 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
513fc7ba-9068-4f8c-af2e-862e87a1ba2d 00:20:00.344 14:13:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:00.603 14:13:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7fff9980-7f60-4cfc-9a92-05e41fbdd885 00:20:00.603 14:13:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7fff9980-7f60-4cfc-9a92-05e41fbdd885 00:20:00.861 14:13:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:00.861 14:13:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:00.861 14:13:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:00.861 14:13:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:00.861 14:13:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:00.861 14:13:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:00.861 14:13:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:00.861 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:00.861 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:00.861 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:00.861 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:00.861 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:01.119 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:01.119 { 00:20:01.119 "name": "ef93d3b7-ec50-4e5c-936e-1721924a7675", 00:20:01.119 "aliases": [ 00:20:01.119 "lvs/nvme0n1p0" 00:20:01.119 ], 00:20:01.120 "product_name": "Logical Volume", 00:20:01.120 "block_size": 4096, 00:20:01.120 "num_blocks": 26476544, 00:20:01.120 "uuid": "ef93d3b7-ec50-4e5c-936e-1721924a7675", 00:20:01.120 "assigned_rate_limits": { 00:20:01.120 "rw_ios_per_sec": 0, 00:20:01.120 "rw_mbytes_per_sec": 0, 00:20:01.120 "r_mbytes_per_sec": 0, 00:20:01.120 "w_mbytes_per_sec": 0 00:20:01.120 }, 00:20:01.120 "claimed": false, 00:20:01.120 "zoned": false, 00:20:01.120 "supported_io_types": { 00:20:01.120 "read": true, 00:20:01.120 "write": true, 00:20:01.120 "unmap": true, 00:20:01.120 "flush": false, 00:20:01.120 "reset": true, 00:20:01.120 "nvme_admin": false, 00:20:01.120 "nvme_io": false, 00:20:01.120 "nvme_io_md": false, 00:20:01.120 "write_zeroes": true, 00:20:01.120 "zcopy": false, 00:20:01.120 "get_zone_info": false, 00:20:01.120 "zone_management": false, 00:20:01.120 "zone_append": false, 00:20:01.120 "compare": false, 00:20:01.120 "compare_and_write": false, 00:20:01.120 "abort": false, 00:20:01.120 "seek_hole": true, 00:20:01.120 "seek_data": true, 00:20:01.120 "copy": false, 00:20:01.120 "nvme_iov_md": false 00:20:01.120 }, 00:20:01.120 "driver_specific": { 00:20:01.120 "lvol": { 00:20:01.120 "lvol_store_uuid": "7fff9980-7f60-4cfc-9a92-05e41fbdd885", 00:20:01.120 "base_bdev": "nvme0n1", 00:20:01.120 "thin_provision": true, 00:20:01.120 "num_allocated_clusters": 0, 00:20:01.120 "snapshot": false, 00:20:01.120 "clone": false, 00:20:01.120 "esnap_clone": false 00:20:01.120 } 00:20:01.120 } 00:20:01.120 } 00:20:01.120 ]' 00:20:01.120 14:13:02 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:01.120 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:01.120 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:01.120 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:01.120 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:01.120 14:13:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:01.120 14:13:02 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:01.120 14:13:02 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:01.120 14:13:02 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:01.378 14:13:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:01.378 14:13:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:01.378 14:13:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:01.378 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:01.378 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:01.378 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:01.378 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:01.378 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:01.636 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:01.636 { 00:20:01.636 "name": "ef93d3b7-ec50-4e5c-936e-1721924a7675", 00:20:01.637 "aliases": [ 00:20:01.637 "lvs/nvme0n1p0" 00:20:01.637 ], 00:20:01.637 "product_name": "Logical Volume", 00:20:01.637 "block_size": 4096, 00:20:01.637 "num_blocks": 26476544, 00:20:01.637 "uuid": "ef93d3b7-ec50-4e5c-936e-1721924a7675", 00:20:01.637 "assigned_rate_limits": { 00:20:01.637 "rw_ios_per_sec": 0, 00:20:01.637 "rw_mbytes_per_sec": 0, 00:20:01.637 "r_mbytes_per_sec": 0, 00:20:01.637 "w_mbytes_per_sec": 0 00:20:01.637 }, 00:20:01.637 "claimed": false, 00:20:01.637 "zoned": false, 00:20:01.637 "supported_io_types": { 00:20:01.637 "read": true, 00:20:01.637 "write": true, 00:20:01.637 "unmap": true, 00:20:01.637 "flush": false, 00:20:01.637 "reset": true, 00:20:01.637 "nvme_admin": false, 00:20:01.637 "nvme_io": false, 00:20:01.637 "nvme_io_md": false, 00:20:01.637 "write_zeroes": true, 00:20:01.637 "zcopy": false, 00:20:01.637 "get_zone_info": false, 00:20:01.637 "zone_management": false, 00:20:01.637 "zone_append": false, 00:20:01.637 "compare": false, 00:20:01.637 "compare_and_write": false, 00:20:01.637 "abort": false, 00:20:01.637 "seek_hole": true, 00:20:01.637 "seek_data": true, 00:20:01.637 "copy": false, 00:20:01.637 "nvme_iov_md": false 00:20:01.637 }, 00:20:01.637 "driver_specific": { 00:20:01.637 "lvol": { 00:20:01.637 "lvol_store_uuid": "7fff9980-7f60-4cfc-9a92-05e41fbdd885", 00:20:01.637 "base_bdev": "nvme0n1", 00:20:01.637 "thin_provision": true, 00:20:01.637 "num_allocated_clusters": 0, 00:20:01.637 "snapshot": false, 00:20:01.637 "clone": false, 00:20:01.637 "esnap_clone": false 00:20:01.637 } 00:20:01.637 } 00:20:01.637 } 00:20:01.637 ]' 00:20:01.637 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:01.637 14:13:03 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:01.637 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:01.637 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:01.637 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:01.637 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:01.637 14:13:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:01.637 14:13:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:01.895 14:13:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:01.895 14:13:03 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:01.895 14:13:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:01.895 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:01.895 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:01.895 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:01.895 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:01.895 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef93d3b7-ec50-4e5c-936e-1721924a7675 00:20:02.154 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:02.154 { 00:20:02.154 "name": "ef93d3b7-ec50-4e5c-936e-1721924a7675", 00:20:02.154 "aliases": [ 00:20:02.154 "lvs/nvme0n1p0" 00:20:02.154 ], 00:20:02.154 "product_name": "Logical Volume", 00:20:02.154 "block_size": 4096, 00:20:02.154 "num_blocks": 26476544, 00:20:02.154 "uuid": "ef93d3b7-ec50-4e5c-936e-1721924a7675", 00:20:02.154 "assigned_rate_limits": { 00:20:02.154 "rw_ios_per_sec": 0, 00:20:02.154 "rw_mbytes_per_sec": 0, 00:20:02.154 "r_mbytes_per_sec": 0, 00:20:02.154 "w_mbytes_per_sec": 0 00:20:02.154 }, 00:20:02.154 "claimed": false, 00:20:02.154 "zoned": false, 00:20:02.154 "supported_io_types": { 00:20:02.154 "read": true, 00:20:02.154 "write": true, 00:20:02.154 "unmap": true, 00:20:02.154 "flush": false, 00:20:02.154 "reset": true, 00:20:02.154 "nvme_admin": false, 00:20:02.154 "nvme_io": false, 00:20:02.154 "nvme_io_md": false, 00:20:02.154 "write_zeroes": true, 00:20:02.154 "zcopy": false, 00:20:02.154 "get_zone_info": false, 00:20:02.154 "zone_management": false, 00:20:02.154 "zone_append": false, 00:20:02.154 "compare": false, 00:20:02.154 "compare_and_write": false, 00:20:02.154 "abort": false, 00:20:02.154 "seek_hole": true, 00:20:02.154 "seek_data": true, 00:20:02.154 "copy": false, 00:20:02.154 "nvme_iov_md": false 00:20:02.154 }, 00:20:02.154 "driver_specific": { 00:20:02.154 "lvol": { 00:20:02.154 "lvol_store_uuid": "7fff9980-7f60-4cfc-9a92-05e41fbdd885", 00:20:02.154 "base_bdev": "nvme0n1", 00:20:02.154 "thin_provision": true, 00:20:02.154 "num_allocated_clusters": 0, 00:20:02.154 "snapshot": false, 00:20:02.154 "clone": false, 00:20:02.154 "esnap_clone": false 00:20:02.154 } 00:20:02.154 } 00:20:02.154 } 00:20:02.154 ]' 00:20:02.154 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:02.154 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:02.154 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:02.154 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:20:02.154 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:02.154 14:13:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:02.154 14:13:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:02.154 14:13:03 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ef93d3b7-ec50-4e5c-936e-1721924a7675 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:02.416 [2024-12-09 14:13:03.963946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.963986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:02.416 [2024-12-09 14:13:03.963999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:02.416 [2024-12-09 14:13:03.964006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.966241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.966379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.416 [2024-12-09 14:13:03.966395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.214 ms 00:20:02.416 [2024-12-09 14:13:03.966401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.966557] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:02.416 [2024-12-09 14:13:03.967133] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:02.416 [2024-12-09 14:13:03.967151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.967158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.416 [2024-12-09 14:13:03.967166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:20:02.416 [2024-12-09 14:13:03.967172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.967239] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 34e99555-49f3-4d3d-b544-1318be7f7bb8 00:20:02.416 [2024-12-09 14:13:03.968166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.968187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:02.416 [2024-12-09 14:13:03.968195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:02.416 [2024-12-09 14:13:03.968203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.973087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.973180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.416 [2024-12-09 14:13:03.973237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.829 ms 00:20:02.416 [2024-12-09 14:13:03.973257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.973369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.973438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.416 [2024-12-09 14:13:03.973458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.059 ms 00:20:02.416 [2024-12-09 14:13:03.973477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.973512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.973534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:02.416 [2024-12-09 14:13:03.973568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.416 [2024-12-09 14:13:03.973586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.973664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:02.416 [2024-12-09 14:13:03.976480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.976577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.416 [2024-12-09 14:13:03.976628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.818 ms 00:20:02.416 [2024-12-09 14:13:03.976646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.976703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.976807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:02.416 [2024-12-09 14:13:03.976828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:02.416 [2024-12-09 14:13:03.976843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.976879] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:02.416 [2024-12-09 14:13:03.977000] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:02.416 [2024-12-09 14:13:03.977121] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:02.416 [2024-12-09 14:13:03.977151] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:02.416 [2024-12-09 14:13:03.977180] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:02.416 [2024-12-09 14:13:03.977205] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:02.416 [2024-12-09 14:13:03.977236] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:02.416 [2024-12-09 14:13:03.977251] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:02.416 [2024-12-09 14:13:03.977268] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:02.416 [2024-12-09 14:13:03.977323] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:02.416 [2024-12-09 14:13:03.977343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 [2024-12-09 14:13:03.977358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:02.416 [2024-12-09 14:13:03.977375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:20:02.416 [2024-12-09 14:13:03.977389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.977488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.416 
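The startup trace above, and the layout dump that follows, is the work of the single bdev_ftl_create call issued from trim.sh@49. For reference, the device stack this test builds can be replayed by hand with the same RPCs that appear earlier in this trace; a sketch, with the PCI addresses and sizes taken from this run and the two angle-bracket placeholders standing in for the values each preceding call prints:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe (QEMU NVMe Ctrl 12341)
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # prints <lvs_uuid>
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>             # thin-provisioned lvol, prints <base_uuid>
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB slice: nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 -d <base_uuid> -c nvc0n1p0 \
       --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

In this run <lvs_uuid> was 7fff9980-7f60-4cfc-9a92-05e41fbdd885 and <base_uuid> was ef93d3b7-ec50-4e5c-936e-1721924a7675, matching the base_bdev reported for ftl0 further down.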
[2024-12-09 14:13:03.977510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:02.416 [2024-12-09 14:13:03.977527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:02.416 [2024-12-09 14:13:03.977551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.416 [2024-12-09 14:13:03.977654] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:02.416 [2024-12-09 14:13:03.977675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:02.416 [2024-12-09 14:13:03.977693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.416 [2024-12-09 14:13:03.977708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.416 [2024-12-09 14:13:03.977752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:02.416 [2024-12-09 14:13:03.977823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:02.416 [2024-12-09 14:13:03.977879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:02.416 [2024-12-09 14:13:03.977897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:02.416 [2024-12-09 14:13:03.977913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:02.416 [2024-12-09 14:13:03.978007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.416 [2024-12-09 14:13:03.978026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:02.417 [2024-12-09 14:13:03.978041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:02.417 [2024-12-09 14:13:03.978058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.417 [2024-12-09 14:13:03.978072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:02.417 [2024-12-09 14:13:03.978118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:02.417 [2024-12-09 14:13:03.978135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:02.417 [2024-12-09 14:13:03.978167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:02.417 [2024-12-09 14:13:03.978182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:02.417 [2024-12-09 14:13:03.978238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.417 [2024-12-09 14:13:03.978301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:02.417 [2024-12-09 14:13:03.978315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.417 [2024-12-09 14:13:03.978364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:02.417 [2024-12-09 14:13:03.978380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.417 [2024-12-09 14:13:03.978433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:02.417 [2024-12-09 14:13:03.978450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.417 [2024-12-09 14:13:03.978479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:02.417 [2024-12-09 14:13:03.978496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.417 [2024-12-09 14:13:03.978577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:02.417 [2024-12-09 14:13:03.978593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:02.417 [2024-12-09 14:13:03.978625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.417 [2024-12-09 14:13:03.978641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:02.417 [2024-12-09 14:13:03.978659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:02.417 [2024-12-09 14:13:03.978673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:02.417 [2024-12-09 14:13:03.978728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:02.417 [2024-12-09 14:13:03.978744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978758] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:02.417 [2024-12-09 14:13:03.978773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:02.417 [2024-12-09 14:13:03.978811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.417 [2024-12-09 14:13:03.978830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.417 [2024-12-09 14:13:03.978922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:02.417 [2024-12-09 14:13:03.978943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:02.417 [2024-12-09 14:13:03.978957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:02.417 [2024-12-09 14:13:03.978973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:02.417 [2024-12-09 14:13:03.978987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:02.417 [2024-12-09 14:13:03.979002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:02.417 [2024-12-09 14:13:03.979097] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:02.417 [2024-12-09 14:13:03.979129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.417 [2024-12-09 14:13:03.979157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:02.417 [2024-12-09 14:13:03.979212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:02.417 [2024-12-09 14:13:03.979235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:02.417 [2024-12-09 14:13:03.979259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:02.417 [2024-12-09 14:13:03.979310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:02.417 [2024-12-09 14:13:03.979385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:02.417 [2024-12-09 14:13:03.979409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:02.417 [2024-12-09 14:13:03.979432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:02.417 [2024-12-09 14:13:03.979454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:02.417 [2024-12-09 14:13:03.979510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:02.417 [2024-12-09 14:13:03.979545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:02.417 [2024-12-09 14:13:03.979570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:02.417 [2024-12-09 14:13:03.979592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:02.417 [2024-12-09 14:13:03.979615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:02.417 [2024-12-09 14:13:03.979673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:02.417 [2024-12-09 14:13:03.979701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.417 [2024-12-09 14:13:03.979723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:02.417 [2024-12-09 14:13:03.979747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:02.417 [2024-12-09 14:13:03.979801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:02.417 [2024-12-09 14:13:03.979829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:02.417 [2024-12-09 14:13:03.979852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.417 [2024-12-09 14:13:03.979869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:02.417 [2024-12-09 14:13:03.979884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms 00:20:02.417 [2024-12-09 14:13:03.979929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.417 [2024-12-09 14:13:03.979997] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:02.417 [2024-12-09 14:13:03.980028] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:05.705 [2024-12-09 14:13:06.789905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.790112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:05.705 [2024-12-09 14:13:06.790192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2809.896 ms 00:20:05.705 [2024-12-09 14:13:06.790223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.815456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.815646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.705 [2024-12-09 14:13:06.815712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.945 ms 00:20:05.705 [2024-12-09 14:13:06.815738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.815889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.816052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:05.705 [2024-12-09 14:13:06.816092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:05.705 [2024-12-09 14:13:06.816116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.859594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.859742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.705 [2024-12-09 14:13:06.859760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.436 ms 00:20:05.705 [2024-12-09 14:13:06.859772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.859847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.859860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.705 [2024-12-09 14:13:06.859869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:05.705 [2024-12-09 14:13:06.859877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.860176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.860200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.705 [2024-12-09 14:13:06.860209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:05.705 [2024-12-09 14:13:06.860218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.860330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.860343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.705 [2024-12-09 14:13:06.860362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:05.705 [2024-12-09 14:13:06.860373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.874464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.874494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:05.705 [2024-12-09 14:13:06.874504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.059 ms 00:20:05.705 [2024-12-09 14:13:06.874514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.885848] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:05.705 [2024-12-09 14:13:06.899754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.899866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:05.705 [2024-12-09 14:13:06.899921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.136 ms 00:20:05.705 [2024-12-09 14:13:06.899944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.971488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.971639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:05.705 [2024-12-09 14:13:06.971694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.464 ms 00:20:05.705 [2024-12-09 14:13:06.971717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.971962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.972001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:05.705 [2024-12-09 14:13:06.972063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:20:05.705 [2024-12-09 14:13:06.972090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:06.995222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:06.995324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:05.705 [2024-12-09 14:13:06.995375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.084 ms 00:20:05.705 [2024-12-09 14:13:06.995398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:07.018004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:07.018102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:05.705 [2024-12-09 14:13:07.018162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.539 ms 00:20:05.705 [2024-12-09 14:13:07.018182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:07.018791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.705 [2024-12-09 14:13:07.018871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:05.705 [2024-12-09 14:13:07.018946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:05.705 [2024-12-09 14:13:07.018968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.705 [2024-12-09 14:13:07.093545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.706 [2024-12-09 14:13:07.093674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:05.706 [2024-12-09 14:13:07.093733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.457 ms 00:20:05.706 [2024-12-09 14:13:07.093745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
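Every management step in this trace is reported by mngt/ftl_mngt.c as an Action (or Rollback) record followed by name:, duration: and status: records. When a startup looks slow, as here where "Scrub NV cache" alone accounts for 2809.896 ms of the roughly 3.2 s total, it can be handy to rank the steps. A small sketch, assuming ftl.log is a hypothetical capture of this output with one record per line as the application emits them:

  grep -E 'trace_step.*(name|duration):' ftl.log |
    awk -F'name: ' '
      /name: / { step = $2; next }                  # remember the step name
      match($0, /duration: [0-9.]+ ms/) {           # pair it with the following duration
          printf "%10.3f ms  %s\n", substr($0, RSTART + 10, RLENGTH - 13), step
      }' |
    sort -rn | head

Against this run it would put Scrub NV cache first by a wide margin, consistent with the "FTL startup, duration = 3201.476 ms" summary just below.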
00:20:05.706 [2024-12-09 14:13:07.118022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.706 [2024-12-09 14:13:07.118052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:05.706 [2024-12-09 14:13:07.118066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.190 ms 00:20:05.706 [2024-12-09 14:13:07.118074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.706 [2024-12-09 14:13:07.141147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.706 [2024-12-09 14:13:07.141270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:05.706 [2024-12-09 14:13:07.141289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.016 ms 00:20:05.706 [2024-12-09 14:13:07.141298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.706 [2024-12-09 14:13:07.164363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.706 [2024-12-09 14:13:07.164474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:05.706 [2024-12-09 14:13:07.164525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.996 ms 00:20:05.706 [2024-12-09 14:13:07.164566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.706 [2024-12-09 14:13:07.164633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.706 [2024-12-09 14:13:07.164666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:05.706 [2024-12-09 14:13:07.164708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:05.706 [2024-12-09 14:13:07.164738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.706 [2024-12-09 14:13:07.164824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.706 [2024-12-09 14:13:07.164916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:05.706 [2024-12-09 14:13:07.164938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:05.706 [2024-12-09 14:13:07.164957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.706 [2024-12-09 14:13:07.165698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:05.706 [2024-12-09 14:13:07.168664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3201.476 ms, result 0 00:20:05.706 [2024-12-09 14:13:07.169353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.706 { 00:20:05.706 "name": "ftl0", 00:20:05.706 "uuid": "34e99555-49f3-4d3d-b544-1318be7f7bb8" 00:20:05.706 } 00:20:05.706 14:13:07 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:05.706 14:13:07 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:05.706 14:13:07 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:05.706 14:13:07 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:05.706 14:13:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:05.706 14:13:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:05.706 14:13:07 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:05.706 14:13:07 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:05.964 [ 00:20:05.964 { 00:20:05.964 "name": "ftl0", 00:20:05.964 "aliases": [ 00:20:05.964 "34e99555-49f3-4d3d-b544-1318be7f7bb8" 00:20:05.964 ], 00:20:05.964 "product_name": "FTL disk", 00:20:05.964 "block_size": 4096, 00:20:05.964 "num_blocks": 23592960, 00:20:05.964 "uuid": "34e99555-49f3-4d3d-b544-1318be7f7bb8", 00:20:05.964 "assigned_rate_limits": { 00:20:05.964 "rw_ios_per_sec": 0, 00:20:05.964 "rw_mbytes_per_sec": 0, 00:20:05.964 "r_mbytes_per_sec": 0, 00:20:05.964 "w_mbytes_per_sec": 0 00:20:05.964 }, 00:20:05.964 "claimed": false, 00:20:05.964 "zoned": false, 00:20:05.964 "supported_io_types": { 00:20:05.964 "read": true, 00:20:05.964 "write": true, 00:20:05.964 "unmap": true, 00:20:05.964 "flush": true, 00:20:05.964 "reset": false, 00:20:05.964 "nvme_admin": false, 00:20:05.964 "nvme_io": false, 00:20:05.964 "nvme_io_md": false, 00:20:05.964 "write_zeroes": true, 00:20:05.964 "zcopy": false, 00:20:05.964 "get_zone_info": false, 00:20:05.964 "zone_management": false, 00:20:05.964 "zone_append": false, 00:20:05.964 "compare": false, 00:20:05.964 "compare_and_write": false, 00:20:05.964 "abort": false, 00:20:05.964 "seek_hole": false, 00:20:05.964 "seek_data": false, 00:20:05.964 "copy": false, 00:20:05.964 "nvme_iov_md": false 00:20:05.964 }, 00:20:05.964 "driver_specific": { 00:20:05.964 "ftl": { 00:20:05.964 "base_bdev": "ef93d3b7-ec50-4e5c-936e-1721924a7675", 00:20:05.964 "cache": "nvc0n1p0" 00:20:05.964 } 00:20:05.964 } 00:20:05.964 } 00:20:05.964 ] 00:20:05.964 14:13:07 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:05.964 14:13:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:05.964 14:13:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:06.223 14:13:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:06.223 14:13:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:06.223 14:13:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:06.223 { 00:20:06.223 "name": "ftl0", 00:20:06.223 "aliases": [ 00:20:06.223 "34e99555-49f3-4d3d-b544-1318be7f7bb8" 00:20:06.223 ], 00:20:06.223 "product_name": "FTL disk", 00:20:06.223 "block_size": 4096, 00:20:06.223 "num_blocks": 23592960, 00:20:06.223 "uuid": "34e99555-49f3-4d3d-b544-1318be7f7bb8", 00:20:06.223 "assigned_rate_limits": { 00:20:06.223 "rw_ios_per_sec": 0, 00:20:06.223 "rw_mbytes_per_sec": 0, 00:20:06.223 "r_mbytes_per_sec": 0, 00:20:06.223 "w_mbytes_per_sec": 0 00:20:06.223 }, 00:20:06.223 "claimed": false, 00:20:06.223 "zoned": false, 00:20:06.223 "supported_io_types": { 00:20:06.223 "read": true, 00:20:06.223 "write": true, 00:20:06.223 "unmap": true, 00:20:06.223 "flush": true, 00:20:06.223 "reset": false, 00:20:06.223 "nvme_admin": false, 00:20:06.223 "nvme_io": false, 00:20:06.223 "nvme_io_md": false, 00:20:06.223 "write_zeroes": true, 00:20:06.223 "zcopy": false, 00:20:06.223 "get_zone_info": false, 00:20:06.223 "zone_management": false, 00:20:06.223 "zone_append": false, 00:20:06.223 "compare": false, 00:20:06.223 "compare_and_write": false, 00:20:06.223 "abort": false, 00:20:06.223 "seek_hole": false, 00:20:06.223 "seek_data": false, 00:20:06.223 "copy": false, 00:20:06.223 "nvme_iov_md": false 00:20:06.223 }, 00:20:06.223 "driver_specific": { 00:20:06.223 "ftl": { 00:20:06.223 "base_bdev": "ef93d3b7-ec50-4e5c-936e-1721924a7675", 
00:20:06.223 "cache": "nvc0n1p0" 00:20:06.223 } 00:20:06.223 } 00:20:06.223 } 00:20:06.223 ]' 00:20:06.223 14:13:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:06.481 14:13:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:06.481 14:13:08 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:06.481 [2024-12-09 14:13:08.204526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-09 14:13:08.204580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:06.481 [2024-12-09 14:13:08.204596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.481 [2024-12-09 14:13:08.204608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-09 14:13:08.204636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:06.481 [2024-12-09 14:13:08.207213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-09 14:13:08.207239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:06.481 [2024-12-09 14:13:08.207254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:20:06.481 [2024-12-09 14:13:08.207262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-09 14:13:08.207748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-09 14:13:08.207762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:06.481 [2024-12-09 14:13:08.207773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:20:06.481 [2024-12-09 14:13:08.207780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-09 14:13:08.211409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-09 14:13:08.211429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:06.481 [2024-12-09 14:13:08.211440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:20:06.481 [2024-12-09 14:13:08.211448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-09 14:13:08.218643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-09 14:13:08.218738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:06.481 [2024-12-09 14:13:08.218839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.140 ms 00:20:06.481 [2024-12-09 14:13:08.218865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-09 14:13:08.242346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-09 14:13:08.242481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:06.481 [2024-12-09 14:13:08.242555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.387 ms 00:20:06.481 [2024-12-09 14:13:08.242579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-09 14:13:08.256810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-09 14:13:08.256924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:06.481 [2024-12-09 14:13:08.257002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.121 ms 00:20:06.481 [2024-12-09 14:13:08.257029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-09 14:13:08.257271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-09 14:13:08.257443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:06.481 [2024-12-09 14:13:08.257497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:06.481 [2024-12-09 14:13:08.257523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.741 [2024-12-09 14:13:08.279969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.741 [2024-12-09 14:13:08.280071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:06.741 [2024-12-09 14:13:08.280127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.392 ms 00:20:06.741 [2024-12-09 14:13:08.280149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.741 [2024-12-09 14:13:08.302182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.741 [2024-12-09 14:13:08.302285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:06.741 [2024-12-09 14:13:08.302305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.864 ms 00:20:06.741 [2024-12-09 14:13:08.302312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.741 [2024-12-09 14:13:08.324199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.741 [2024-12-09 14:13:08.324297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:06.741 [2024-12-09 14:13:08.324354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.832 ms 00:20:06.741 [2024-12-09 14:13:08.324376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.741 [2024-12-09 14:13:08.346199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.741 [2024-12-09 14:13:08.346298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:06.741 [2024-12-09 14:13:08.346355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.712 ms 00:20:06.741 [2024-12-09 14:13:08.346378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.741 [2024-12-09 14:13:08.346438] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:06.741 [2024-12-09 14:13:08.346584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:06.741 [2024-12-09 14:13:08.346629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:06.741 [2024-12-09 14:13:08.346663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:06.741 [2024-12-09 14:13:08.346733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:06.741 [2024-12-09 14:13:08.346767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:06.741 [2024-12-09 14:13:08.346803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:06.741 [2024-12-09 14:13:08.346860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:06.741 [2024-12-09 14:13:08.346896] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8-100: 0 / 261120 wr_cnt: 0 state: free (identical for every remaining band)
00:20:06.742 [2024-12-09 14:13:08.348510] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:06.742 [2024-12-09 14:13:08.348521] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 34e99555-49f3-4d3d-b544-1318be7f7bb8
00:20:06.742 [2024-12-09 14:13:08.348530] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:06.742 [2024-12-09 14:13:08.348548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:06.742 [2024-12-09 14:13:08.348555] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:06.742 [2024-12-09 14:13:08.348567] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:06.742 [2024-12-09 14:13:08.348573] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:06.742 [2024-12-09 14:13:08.348582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:06.742 [2024-12-09 14:13:08.348589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:06.742 [2024-12-09 14:13:08.348597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:06.742 [2024-12-09 14:13:08.348603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:06.742 [2024-12-09 14:13:08.348612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.742 [2024-12-09 14:13:08.348619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:06.742 [2024-12-09 14:13:08.348629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:20:06.742 [2024-12-09 14:13:08.348635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.742 [2024-12-09 14:13:08.360755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.742 [2024-12-09 14:13:08.360782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:06.742 [2024-12-09 14:13:08.360795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.076 ms 00:20:06.743 [2024-12-09 14:13:08.360804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.743 [2024-12-09 14:13:08.361166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.743 [2024-12-09 14:13:08.361179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:06.743 [2024-12-09 14:13:08.361189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:20:06.743 [2024-12-09 14:13:08.361196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.743 [2024-12-09 14:13:08.404401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.743 [2024-12-09 14:13:08.404439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.743 [2024-12-09 14:13:08.404451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.743 [2024-12-09 14:13:08.404459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.743 [2024-12-09 14:13:08.404579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.743 [2024-12-09 14:13:08.404593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.743 [2024-12-09 14:13:08.404602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.743 [2024-12-09 14:13:08.404610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.743 [2024-12-09 14:13:08.404670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.743 [2024-12-09 14:13:08.404683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.743 [2024-12-09 14:13:08.404696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.743 [2024-12-09 14:13:08.404704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.743 [2024-12-09 14:13:08.404726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.743 [2024-12-09 14:13:08.404734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.743 [2024-12-09 14:13:08.404742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.743 [2024-12-09 14:13:08.404750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.743 [2024-12-09 14:13:08.484127] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.743 [2024-12-09 14:13:08.484173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.743 [2024-12-09 14:13:08.484186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.743 [2024-12-09 14:13:08.484195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.001 [2024-12-09 14:13:08.545332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.001 [2024-12-09 14:13:08.545371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.001 [2024-12-09 14:13:08.545383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.001 [2024-12-09 14:13:08.545390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.001 [2024-12-09 14:13:08.545478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.001 [2024-12-09 14:13:08.545487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.001 [2024-12-09 14:13:08.545499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.001 [2024-12-09 14:13:08.545509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.001 [2024-12-09 14:13:08.545566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.001 [2024-12-09 14:13:08.545575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.001 [2024-12-09 14:13:08.545584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.001 [2024-12-09 14:13:08.545591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.001 [2024-12-09 14:13:08.545692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.001 [2024-12-09 14:13:08.545705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.001 [2024-12-09 14:13:08.545715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.001 [2024-12-09 14:13:08.545724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.002 [2024-12-09 14:13:08.545769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.002 [2024-12-09 14:13:08.545781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:07.002 [2024-12-09 14:13:08.545790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.002 [2024-12-09 14:13:08.545797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.002 [2024-12-09 14:13:08.545841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.002 [2024-12-09 14:13:08.545853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:07.002 [2024-12-09 14:13:08.545864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.002 [2024-12-09 14:13:08.545871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.002 [2024-12-09 14:13:08.545921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.002 [2024-12-09 14:13:08.545934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:07.002 [2024-12-09 14:13:08.545944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.002 [2024-12-09 14:13:08.545951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:07.002 [2024-12-09 14:13:08.546122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.581 ms, result 0 00:20:07.002 true 00:20:07.002 14:13:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76337 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76337 ']' 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76337 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76337 00:20:07.002 killing process with pid 76337 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76337' 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76337 00:20:07.002 14:13:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76337 00:20:10.285 14:13:11 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:11.668 65536+0 records in 00:20:11.668 65536+0 records out 00:20:11.668 268435456 bytes (268 MB, 256 MiB) copied, 1.10779 s, 242 MB/s 00:20:11.668 14:13:13 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:11.668 [2024-12-09 14:13:13.125101] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
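For reference, the trim.sh steps traced above fit together as follows: @54-@56 wrap the dumped bdev subsystem in a {"subsystems": [...]} envelope so spdk_dd can recreate the same bdev stack on its own, @66 generates the 256 MiB random pattern, and @69 replays that pattern into the ftl0 bdev. A minimal sketch of the sequence; the redirection targets are inferred from the --if/--json arguments above, since the trace does not show them explicitly:

SPDK=/home/vagrant/spdk_repo/spdk
CFG=$SPDK/test/ftl/config/ftl.json
# Wrap the bdev subsystem config dumped from the running target app.
echo '{"subsystems": [' > "$CFG"
"$SPDK"/scripts/rpc.py save_subsystem_config -n bdev >> "$CFG"
echo ']}' >> "$CFG"
# 65536 blocks x 4 KiB = 268435456 bytes = 256 MiB of random data;
# dd reports 268435456 B / 1.10779 s, i.e. the 242 MB/s seen above.
dd if=/dev/urandom of="$SPDK"/test/ftl/random_pattern bs=4K count=65536
# --ob names an output *bdev*, not a file; spdk_dd boots its own
# SPDK application from the JSON config to find ftl0.
"$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/random_pattern --ob=ftl0 --json="$CFG"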
00:20:11.668 [2024-12-09 14:13:13.126056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76548 ] 00:20:11.668 [2024-12-09 14:13:13.291035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.668 [2024-12-09 14:13:13.425657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.243 [2024-12-09 14:13:13.733742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.243 [2024-12-09 14:13:13.733833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.243 [2024-12-09 14:13:13.897647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.897715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:12.243 [2024-12-09 14:13:13.897731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:12.243 [2024-12-09 14:13:13.897741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.900719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.900769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.243 [2024-12-09 14:13:13.900780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:20:12.243 [2024-12-09 14:13:13.900789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.900911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:12.243 [2024-12-09 14:13:13.901863] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:12.243 [2024-12-09 14:13:13.901911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.901922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.243 [2024-12-09 14:13:13.901932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:20:12.243 [2024-12-09 14:13:13.901941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.903728] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:12.243 [2024-12-09 14:13:13.918610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.918661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:12.243 [2024-12-09 14:13:13.918674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.881 ms 00:20:12.243 [2024-12-09 14:13:13.918683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.918803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.918815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:12.243 [2024-12-09 14:13:13.918826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:12.243 [2024-12-09 14:13:13.918834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.927132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:12.243 [2024-12-09 14:13:13.927177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.243 [2024-12-09 14:13:13.927188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.252 ms 00:20:12.243 [2024-12-09 14:13:13.927197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.927306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.927316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.243 [2024-12-09 14:13:13.927325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:12.243 [2024-12-09 14:13:13.927334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.927365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.927375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:12.243 [2024-12-09 14:13:13.927384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:12.243 [2024-12-09 14:13:13.927392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.927415] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:12.243 [2024-12-09 14:13:13.931546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.931584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.243 [2024-12-09 14:13:13.931595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.125 ms 00:20:12.243 [2024-12-09 14:13:13.931604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.931685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.931697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:12.243 [2024-12-09 14:13:13.931707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:12.243 [2024-12-09 14:13:13.931715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.931740] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:12.243 [2024-12-09 14:13:13.931762] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:12.243 [2024-12-09 14:13:13.931798] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:12.243 [2024-12-09 14:13:13.931816] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:12.243 [2024-12-09 14:13:13.931921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:12.243 [2024-12-09 14:13:13.931932] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:12.243 [2024-12-09 14:13:13.931943] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:12.243 [2024-12-09 14:13:13.931956] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:12.243 [2024-12-09 14:13:13.931965] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:12.243 [2024-12-09 14:13:13.931975] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:12.243 [2024-12-09 14:13:13.931983] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:12.243 [2024-12-09 14:13:13.931991] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:12.243 [2024-12-09 14:13:13.931999] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:12.243 [2024-12-09 14:13:13.932007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.932015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:12.243 [2024-12-09 14:13:13.932023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:12.243 [2024-12-09 14:13:13.932031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.932129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.243 [2024-12-09 14:13:13.932147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:12.243 [2024-12-09 14:13:13.932161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:12.243 [2024-12-09 14:13:13.932171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.243 [2024-12-09 14:13:13.932278] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:12.243 [2024-12-09 14:13:13.932289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:12.243 [2024-12-09 14:13:13.932298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.243 [2024-12-09 14:13:13.932306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.243 [2024-12-09 14:13:13.932315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:12.243 [2024-12-09 14:13:13.932323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:12.243 [2024-12-09 14:13:13.932330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:12.243 [2024-12-09 14:13:13.932339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:12.243 [2024-12-09 14:13:13.932346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:12.243 [2024-12-09 14:13:13.932353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.243 [2024-12-09 14:13:13.932360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:12.243 [2024-12-09 14:13:13.932374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:12.243 [2024-12-09 14:13:13.932382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.243 [2024-12-09 14:13:13.932388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:12.243 [2024-12-09 14:13:13.932395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:12.243 [2024-12-09 14:13:13.932402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.243 [2024-12-09 14:13:13.932409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:12.243 [2024-12-09 14:13:13.932416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:12.243 [2024-12-09 14:13:13.932423] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.243 [2024-12-09 14:13:13.932430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:12.243 [2024-12-09 14:13:13.932437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:12.243 [2024-12-09 14:13:13.932445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.243 [2024-12-09 14:13:13.932452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:12.243 [2024-12-09 14:13:13.932460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:12.243 [2024-12-09 14:13:13.932466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.243 [2024-12-09 14:13:13.932475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:12.243 [2024-12-09 14:13:13.932482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:12.243 [2024-12-09 14:13:13.932489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.243 [2024-12-09 14:13:13.932496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:12.243 [2024-12-09 14:13:13.932502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:12.244 [2024-12-09 14:13:13.932509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.244 [2024-12-09 14:13:13.932515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:12.244 [2024-12-09 14:13:13.932523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:12.244 [2024-12-09 14:13:13.932529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.244 [2024-12-09 14:13:13.932564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:12.244 [2024-12-09 14:13:13.932573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:12.244 [2024-12-09 14:13:13.932580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.244 [2024-12-09 14:13:13.932588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:12.244 [2024-12-09 14:13:13.932595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:12.244 [2024-12-09 14:13:13.932602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.244 [2024-12-09 14:13:13.932609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:12.244 [2024-12-09 14:13:13.932617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:12.244 [2024-12-09 14:13:13.932624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.244 [2024-12-09 14:13:13.932632] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:12.244 [2024-12-09 14:13:13.932640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:12.244 [2024-12-09 14:13:13.932651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.244 [2024-12-09 14:13:13.932659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.244 [2024-12-09 14:13:13.932667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:12.244 [2024-12-09 14:13:13.932675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:12.244 [2024-12-09 14:13:13.932682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:12.244 
[2024-12-09 14:13:13.932690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:12.244 [2024-12-09 14:13:13.932697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:12.244 [2024-12-09 14:13:13.932704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:12.244 [2024-12-09 14:13:13.932713] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:12.244 [2024-12-09 14:13:13.932722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.244 [2024-12-09 14:13:13.932731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:12.244 [2024-12-09 14:13:13.932739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:12.244 [2024-12-09 14:13:13.932747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:12.244 [2024-12-09 14:13:13.932755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:12.244 [2024-12-09 14:13:13.932764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:12.244 [2024-12-09 14:13:13.932772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:12.244 [2024-12-09 14:13:13.932779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:12.244 [2024-12-09 14:13:13.932787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:12.244 [2024-12-09 14:13:13.932795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:12.244 [2024-12-09 14:13:13.932802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:12.244 [2024-12-09 14:13:13.932810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:12.244 [2024-12-09 14:13:13.932817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:12.244 [2024-12-09 14:13:13.932825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:12.244 [2024-12-09 14:13:13.932833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:12.244 [2024-12-09 14:13:13.932841] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:12.244 [2024-12-09 14:13:13.932849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.244 [2024-12-09 14:13:13.932859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:12.244 [2024-12-09 14:13:13.932879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:12.244 [2024-12-09 14:13:13.932886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:12.244 [2024-12-09 14:13:13.932894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:12.244 [2024-12-09 14:13:13.932901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.244 [2024-12-09 14:13:13.932913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:12.244 [2024-12-09 14:13:13.932921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:20:12.244 [2024-12-09 14:13:13.932929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.244 [2024-12-09 14:13:13.965626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.244 [2024-12-09 14:13:13.965857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:12.244 [2024-12-09 14:13:13.965877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.634 ms 00:20:12.244 [2024-12-09 14:13:13.965888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.244 [2024-12-09 14:13:13.966046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.244 [2024-12-09 14:13:13.966058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:12.244 [2024-12-09 14:13:13.966068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:12.244 [2024-12-09 14:13:13.966077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.244 [2024-12-09 14:13:14.012577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.244 [2024-12-09 14:13:14.012642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.244 [2024-12-09 14:13:14.012660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.473 ms 00:20:12.244 [2024-12-09 14:13:14.012669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.244 [2024-12-09 14:13:14.012809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.244 [2024-12-09 14:13:14.012822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.244 [2024-12-09 14:13:14.012832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.244 [2024-12-09 14:13:14.012841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.244 [2024-12-09 14:13:14.013446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.244 [2024-12-09 14:13:14.013489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.244 [2024-12-09 14:13:14.013510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:20:12.244 [2024-12-09 14:13:14.013518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.244 [2024-12-09 14:13:14.013705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.244 [2024-12-09 14:13:14.013736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.244 [2024-12-09 14:13:14.013746] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:12.244 [2024-12-09 14:13:14.013754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.244 [2024-12-09 14:13:14.030508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.244 [2024-12-09 14:13:14.030578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.244 [2024-12-09 14:13:14.030592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.730 ms 00:20:12.244 [2024-12-09 14:13:14.030601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.045611] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:12.507 [2024-12-09 14:13:14.045665] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:12.507 [2024-12-09 14:13:14.045680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.045689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:12.507 [2024-12-09 14:13:14.045699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.945 ms 00:20:12.507 [2024-12-09 14:13:14.045707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.073473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.073763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:12.507 [2024-12-09 14:13:14.073789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.658 ms 00:20:12.507 [2024-12-09 14:13:14.073798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.087938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.087989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:12.507 [2024-12-09 14:13:14.088003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.018 ms 00:20:12.507 [2024-12-09 14:13:14.088012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.100653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.100831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:12.507 [2024-12-09 14:13:14.100855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.539 ms 00:20:12.507 [2024-12-09 14:13:14.100862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.101824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.102011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:12.507 [2024-12-09 14:13:14.102088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:20:12.507 [2024-12-09 14:13:14.102111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.166597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.166887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:12.507 [2024-12-09 14:13:14.166956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.435 ms 00:20:12.507 [2024-12-09 14:13:14.166968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.178844] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:12.507 [2024-12-09 14:13:14.200321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.200505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:12.507 [2024-12-09 14:13:14.200527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.238 ms 00:20:12.507 [2024-12-09 14:13:14.200552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.200676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.200689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:12.507 [2024-12-09 14:13:14.200699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:12.507 [2024-12-09 14:13:14.200707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.200767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.200777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:12.507 [2024-12-09 14:13:14.200787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:12.507 [2024-12-09 14:13:14.200795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.200829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.200841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:12.507 [2024-12-09 14:13:14.200849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:12.507 [2024-12-09 14:13:14.200858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.200898] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:12.507 [2024-12-09 14:13:14.200910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.200919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:12.507 [2024-12-09 14:13:14.200928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:12.507 [2024-12-09 14:13:14.200936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.226532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.226591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:12.507 [2024-12-09 14:13:14.226606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:20:12.507 [2024-12-09 14:13:14.226615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.507 [2024-12-09 14:13:14.226734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.507 [2024-12-09 14:13:14.226747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:12.507 [2024-12-09 14:13:14.226757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:12.507 [2024-12-09 14:13:14.226766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
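The layout numbers in the dump above can be cross-checked against the bdev_get_bdevs output from earlier in the run ("num_blocks": 23592960, "block_size": 4096). Two quick shell-arithmetic checks, shown only for illustration:

# 23592960 L2P entries x 4096 B logical block = 90 GiB of user-visible
# space; the rest of the 103424.00 MiB base device goes to FTL metadata
# and over-provisioning.
echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 ))   # -> 90 (GiB)
# L2P table: 23592960 entries x 4 B ("L2P address size: 4") = 90 MiB,
# matching the 90.00 MiB l2p region in the NV cache layout.
echo $(( 23592960 * 4 / 1024 / 1024 ))             # -> 90 (MiB)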
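Each management step in these traces is logged by mngt/ftl_mngt.c as a four-line group (427 Action, 428 name, 430 duration, 431 status), so per-step timings are easy to pull out of a saved console log. A convenience sketch, assuming the output has been captured to build.log (file name hypothetical, not part of the test):

# Pair each step name (line 428) with the duration that follows it
# (line 430), then list the slowest steps first.
awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); print $0 "\t" name }' build.log | sort -rn

On this startup the slowest steps are Restore P2L checkpoints (64.435 ms) and Initialize NV cache (46.473 ms), consistent with the 330.441 ms 'FTL startup' total reported just below.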
00:20:12.507 [2024-12-09 14:13:14.228408] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:12.507 [2024-12-09 14:13:14.232057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.441 ms, result 0 00:20:12.507 [2024-12-09 14:13:14.232853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:12.507 [2024-12-09 14:13:14.246293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.898  [2024-12-09T14:13:16.261Z] Copying: 22/256 [MB] (22 MBps) [2024-12-09T14:13:17.649Z] Copying: 46/256 [MB] (24 MBps) [2024-12-09T14:13:18.591Z] Copying: 65/256 [MB] (19 MBps) [2024-12-09T14:13:19.535Z] Copying: 85/256 [MB] (19 MBps) [2024-12-09T14:13:20.477Z] Copying: 101/256 [MB] (16 MBps) [2024-12-09T14:13:21.476Z] Copying: 120/256 [MB] (19 MBps) [2024-12-09T14:13:22.415Z] Copying: 137/256 [MB] (16 MBps) [2024-12-09T14:13:23.358Z] Copying: 157/256 [MB] (20 MBps) [2024-12-09T14:13:24.302Z] Copying: 180/256 [MB] (22 MBps) [2024-12-09T14:13:25.687Z] Copying: 190/256 [MB] (10 MBps) [2024-12-09T14:13:26.259Z] Copying: 205224/262144 [kB] (10124 kBps) [2024-12-09T14:13:27.642Z] Copying: 215400/262144 [kB] (10176 kBps) [2024-12-09T14:13:28.585Z] Copying: 233/256 [MB] (22 MBps) [2024-12-09T14:13:29.160Z] Copying: 247/256 [MB] (14 MBps) [2024-12-09T14:13:29.160Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-09 14:13:28.916739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:27.366 [2024-12-09 14:13:28.927257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:28.927467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:27.366 [2024-12-09 14:13:28.927491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:27.366 [2024-12-09 14:13:28.927510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:28.927575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:27.366 [2024-12-09 14:13:28.930623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:28.930665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:27.366 [2024-12-09 14:13:28.930677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:20:27.366 [2024-12-09 14:13:28.930687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:28.933486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:28.933551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:27.366 [2024-12-09 14:13:28.933563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:20:27.366 [2024-12-09 14:13:28.933571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:28.941959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:28.942015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:27.366 [2024-12-09 14:13:28.942026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.368 ms 00:20:27.366 [2024-12-09 
14:13:28.942034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:28.949010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:28.949200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:27.366 [2024-12-09 14:13:28.949245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.928 ms 00:20:27.366 [2024-12-09 14:13:28.949253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:28.974958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:28.975008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:27.366 [2024-12-09 14:13:28.975021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.635 ms 00:20:27.366 [2024-12-09 14:13:28.975029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:28.991451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:28.991509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:27.366 [2024-12-09 14:13:28.991527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.368 ms 00:20:27.366 [2024-12-09 14:13:28.991557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:28.991735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:28.991747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:27.366 [2024-12-09 14:13:28.991757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:27.366 [2024-12-09 14:13:28.991773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:29.018281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:29.018331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:27.366 [2024-12-09 14:13:29.018343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.490 ms 00:20:27.366 [2024-12-09 14:13:29.018352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:29.044118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:29.044168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:27.366 [2024-12-09 14:13:29.044181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.699 ms 00:20:27.366 [2024-12-09 14:13:29.044188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:29.069400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:29.069450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:27.366 [2024-12-09 14:13:29.069461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.159 ms 00:20:27.366 [2024-12-09 14:13:29.069468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:29.094810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.366 [2024-12-09 14:13:29.094859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:27.366 [2024-12-09 14:13:29.094871] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.232 ms 00:20:27.366 [2024-12-09 14:13:29.094878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.366 [2024-12-09 14:13:29.094929] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:27.366 [2024-12-09 14:13:29.094944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.094955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.094963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.094971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.094978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.094986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.094994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:20:27.366 [2024-12-09 14:13:29.095117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:27.366 [2024-12-09 14:13:29.095183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095718] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:27.367 [2024-12-09 14:13:29.095749] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:27.367 [2024-12-09 14:13:29.095758] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 34e99555-49f3-4d3d-b544-1318be7f7bb8 00:20:27.367 [2024-12-09 14:13:29.095766] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:27.367 [2024-12-09 14:13:29.095773] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:27.367 [2024-12-09 14:13:29.095781] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:27.367 [2024-12-09 14:13:29.095790] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:27.367 [2024-12-09 14:13:29.095797] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:27.367 [2024-12-09 14:13:29.095806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:27.367 [2024-12-09 14:13:29.095814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:27.367 [2024-12-09 14:13:29.095820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:27.367 [2024-12-09 14:13:29.095827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:27.367 [2024-12-09 14:13:29.095834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.367 [2024-12-09 14:13:29.095845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:27.367 [2024-12-09 14:13:29.095855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:20:27.367 [2024-12-09 14:13:29.095863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.367 [2024-12-09 14:13:29.109623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.367 [2024-12-09 14:13:29.109664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:27.367 [2024-12-09 14:13:29.109677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.725 ms 00:20:27.367 [2024-12-09 14:13:29.109685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.367 [2024-12-09 14:13:29.110092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.367 [2024-12-09 14:13:29.110103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:27.367 [2024-12-09 14:13:29.110111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:27.367 [2024-12-09 14:13:29.110119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.367 [2024-12-09 14:13:29.149247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.367 [2024-12-09 14:13:29.149455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.367 [2024-12-09 14:13:29.149478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.368 [2024-12-09 14:13:29.149487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.368 [2024-12-09 14:13:29.149624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
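The 'Set FTL clean state' step above triggers ftl_dev_dump_bands() and ftl_dev_dump_stats(): all 100 bands report 0 / 261120 valid blocks in state free, and since user writes is 0 against 960 total writes, the write amplification factor (total writes divided by user writes) is undefined and prints as 'WAF: inf'. A short sketch for condensing such a dump, again assuming a per-line capture in an illustratively named build.log:

  # Count bands per state from an FTL shutdown dump; the header line
  # carries no "state:" field and drops out of the grep -o stage.
  grep 'ftl_dev_dump_bands' build.log | grep -o 'state: [a-z]*' | sort | uniq -c

For the dump above this prints a single bucket: 100 occurrences of 'state: free'.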
00:20:27.368 [2024-12-09 14:13:29.149636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.368 [2024-12-09 14:13:29.149646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.368 [2024-12-09 14:13:29.149653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.368 [2024-12-09 14:13:29.149711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.368 [2024-12-09 14:13:29.149721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.368 [2024-12-09 14:13:29.149729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.368 [2024-12-09 14:13:29.149736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.368 [2024-12-09 14:13:29.149754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.368 [2024-12-09 14:13:29.149765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.368 [2024-12-09 14:13:29.149774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.368 [2024-12-09 14:13:29.149782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 14:13:29.233929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.629 [2024-12-09 14:13:29.233991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.629 [2024-12-09 14:13:29.234005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.629 [2024-12-09 14:13:29.234014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 14:13:29.302960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.629 [2024-12-09 14:13:29.303018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.629 [2024-12-09 14:13:29.303030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.629 [2024-12-09 14:13:29.303039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 14:13:29.303118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.629 [2024-12-09 14:13:29.303130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.629 [2024-12-09 14:13:29.303139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.629 [2024-12-09 14:13:29.303148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 14:13:29.303181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.629 [2024-12-09 14:13:29.303190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.629 [2024-12-09 14:13:29.303204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.629 [2024-12-09 14:13:29.303212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 14:13:29.303312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.629 [2024-12-09 14:13:29.303323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.629 [2024-12-09 14:13:29.303331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.629 [2024-12-09 14:13:29.303340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 
14:13:29.303375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.629 [2024-12-09 14:13:29.303385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.629 [2024-12-09 14:13:29.303393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.629 [2024-12-09 14:13:29.303406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 14:13:29.303452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.629 [2024-12-09 14:13:29.303462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.629 [2024-12-09 14:13:29.303471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.629 [2024-12-09 14:13:29.303479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 14:13:29.303528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.629 [2024-12-09 14:13:29.303574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.629 [2024-12-09 14:13:29.303586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.629 [2024-12-09 14:13:29.303595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.629 [2024-12-09 14:13:29.303775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.522 ms, result 0 00:20:28.571 00:20:28.571 00:20:28.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.832 14:13:30 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76737 00:20:28.832 14:13:30 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76737 00:20:28.832 14:13:30 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:28.832 14:13:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76737 ']' 00:20:28.832 14:13:30 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.832 14:13:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.832 14:13:30 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.832 14:13:30 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.832 14:13:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:28.832 [2024-12-09 14:13:30.457042] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
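Per the xtrace markers above, ftl/trim.sh lines 71-73 launch spdk_tgt with the ftl_init debug log component enabled, record its pid in svcpid, and block in waitforlisten until the RPC socket /var/tmp/spdk.sock is up. waitforlisten is provided by SPDK's common/autotest_common.sh; the polling loop below is only an illustrative stand-in for it, not its actual body:

  # Launch the target and wait for its RPC socket (paths as logged above).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done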
00:20:28.832 [2024-12-09 14:13:30.457158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76737 ] 00:20:28.832 [2024-12-09 14:13:30.619095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.093 [2024-12-09 14:13:30.716360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.662 14:13:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.662 14:13:31 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:29.662 14:13:31 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:29.923 [2024-12-09 14:13:31.484234] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.923 [2024-12-09 14:13:31.484296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.923 [2024-12-09 14:13:31.661175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.661256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.923 [2024-12-09 14:13:31.661275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.923 [2024-12-09 14:13:31.661284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.664338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.664573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.923 [2024-12-09 14:13:31.664600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:20:29.923 [2024-12-09 14:13:31.664610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.664846] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.923 [2024-12-09 14:13:31.665784] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.923 [2024-12-09 14:13:31.665843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.665852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.923 [2024-12-09 14:13:31.665864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:20:29.923 [2024-12-09 14:13:31.665873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.667694] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:29.923 [2024-12-09 14:13:31.682105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.682164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.923 [2024-12-09 14:13:31.682178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.414 ms 00:20:29.923 [2024-12-09 14:13:31.682189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.682315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.682329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.923 [2024-12-09 14:13:31.682339] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:29.923 [2024-12-09 14:13:31.682350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.690884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.690938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.923 [2024-12-09 14:13:31.690949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.480 ms 00:20:29.923 [2024-12-09 14:13:31.690959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.691084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.691098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.923 [2024-12-09 14:13:31.691107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:29.923 [2024-12-09 14:13:31.691121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.691150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.691160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.923 [2024-12-09 14:13:31.691168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:29.923 [2024-12-09 14:13:31.691177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.691201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:29.923 [2024-12-09 14:13:31.695639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.695682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.923 [2024-12-09 14:13:31.695696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.441 ms 00:20:29.923 [2024-12-09 14:13:31.695705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.695792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.695802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.923 [2024-12-09 14:13:31.695815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:29.923 [2024-12-09 14:13:31.695825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.695848] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.923 [2024-12-09 14:13:31.695872] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:29.923 [2024-12-09 14:13:31.695920] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.923 [2024-12-09 14:13:31.695936] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:29.923 [2024-12-09 14:13:31.696046] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.923 [2024-12-09 14:13:31.696059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.923 [2024-12-09 14:13:31.696074] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:29.923 [2024-12-09 14:13:31.696085] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.923 [2024-12-09 14:13:31.696096] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.923 [2024-12-09 14:13:31.696105] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:29.923 [2024-12-09 14:13:31.696114] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.923 [2024-12-09 14:13:31.696124] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.923 [2024-12-09 14:13:31.696136] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.923 [2024-12-09 14:13:31.696144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.696154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.923 [2024-12-09 14:13:31.696162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:20:29.923 [2024-12-09 14:13:31.696172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.696261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.923 [2024-12-09 14:13:31.696271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.923 [2024-12-09 14:13:31.696280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:29.923 [2024-12-09 14:13:31.696289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.923 [2024-12-09 14:13:31.696389] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.923 [2024-12-09 14:13:31.696408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.923 [2024-12-09 14:13:31.696416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.923 [2024-12-09 14:13:31.696426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.923 [2024-12-09 14:13:31.696434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.923 [2024-12-09 14:13:31.696446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.923 [2024-12-09 14:13:31.696452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:29.923 [2024-12-09 14:13:31.696463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.923 [2024-12-09 14:13:31.696470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:29.923 [2024-12-09 14:13:31.696497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.923 [2024-12-09 14:13:31.696504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.923 [2024-12-09 14:13:31.696512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:29.923 [2024-12-09 14:13:31.696519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.923 [2024-12-09 14:13:31.696527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.923 [2024-12-09 14:13:31.696561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:29.923 [2024-12-09 14:13:31.696572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.923 
[2024-12-09 14:13:31.696579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.923 [2024-12-09 14:13:31.696589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:29.923 [2024-12-09 14:13:31.696603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.923 [2024-12-09 14:13:31.696613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.923 [2024-12-09 14:13:31.696620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:29.923 [2024-12-09 14:13:31.696629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.923 [2024-12-09 14:13:31.696636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.923 [2024-12-09 14:13:31.696647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:29.923 [2024-12-09 14:13:31.696654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.923 [2024-12-09 14:13:31.696663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.923 [2024-12-09 14:13:31.696669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:29.923 [2024-12-09 14:13:31.696678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.924 [2024-12-09 14:13:31.696685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.924 [2024-12-09 14:13:31.696696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:29.924 [2024-12-09 14:13:31.696703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.924 [2024-12-09 14:13:31.696712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.924 [2024-12-09 14:13:31.696718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:29.924 [2024-12-09 14:13:31.696727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.924 [2024-12-09 14:13:31.696734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.924 [2024-12-09 14:13:31.696742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:29.924 [2024-12-09 14:13:31.696748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.924 [2024-12-09 14:13:31.696758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.924 [2024-12-09 14:13:31.696764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:29.924 [2024-12-09 14:13:31.696775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.924 [2024-12-09 14:13:31.696781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.924 [2024-12-09 14:13:31.696789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:29.924 [2024-12-09 14:13:31.696796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.924 [2024-12-09 14:13:31.696805] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.924 [2024-12-09 14:13:31.696815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.924 [2024-12-09 14:13:31.696824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.924 [2024-12-09 14:13:31.696831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.924 [2024-12-09 14:13:31.696841] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:29.924 [2024-12-09 14:13:31.696848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.924 [2024-12-09 14:13:31.696857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.924 [2024-12-09 14:13:31.696865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.924 [2024-12-09 14:13:31.696874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.924 [2024-12-09 14:13:31.696881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.924 [2024-12-09 14:13:31.696892] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.924 [2024-12-09 14:13:31.696902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.924 [2024-12-09 14:13:31.696916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:29.924 [2024-12-09 14:13:31.696927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:29.924 [2024-12-09 14:13:31.696937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:29.924 [2024-12-09 14:13:31.696944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:29.924 [2024-12-09 14:13:31.696953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:29.924 [2024-12-09 14:13:31.696960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:29.924 [2024-12-09 14:13:31.696970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:29.924 [2024-12-09 14:13:31.696976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:29.924 [2024-12-09 14:13:31.696985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:29.924 [2024-12-09 14:13:31.696993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:29.924 [2024-12-09 14:13:31.697002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:29.924 [2024-12-09 14:13:31.697009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:29.924 [2024-12-09 14:13:31.697018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:29.924 [2024-12-09 14:13:31.697027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:29.924 [2024-12-09 14:13:31.697036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.924 [2024-12-09 
14:13:31.697045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.924 [2024-12-09 14:13:31.697057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.924 [2024-12-09 14:13:31.697064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.924 [2024-12-09 14:13:31.697073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.924 [2024-12-09 14:13:31.697080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.924 [2024-12-09 14:13:31.697090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.924 [2024-12-09 14:13:31.697098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.924 [2024-12-09 14:13:31.697108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:20:29.924 [2024-12-09 14:13:31.697117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.730711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.730762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.186 [2024-12-09 14:13:31.730778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.529 ms 00:20:30.186 [2024-12-09 14:13:31.730791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.730936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.730947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:30.186 [2024-12-09 14:13:31.730959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:30.186 [2024-12-09 14:13:31.730967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.766297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.766512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.186 [2024-12-09 14:13:31.766728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.303 ms 00:20:30.186 [2024-12-09 14:13:31.766762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.766871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.766882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:30.186 [2024-12-09 14:13:31.766894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:30.186 [2024-12-09 14:13:31.766902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.767410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.767453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:30.186 [2024-12-09 14:13:31.767465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:20:30.186 [2024-12-09 14:13:31.767472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.767639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.767650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:30.186 [2024-12-09 14:13:31.767660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:20:30.186 [2024-12-09 14:13:31.767668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.785913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.786089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:30.186 [2024-12-09 14:13:31.786111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.218 ms 00:20:30.186 [2024-12-09 14:13:31.786120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.809279] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:30.186 [2024-12-09 14:13:31.809339] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:30.186 [2024-12-09 14:13:31.809360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.809371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:30.186 [2024-12-09 14:13:31.809386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.113 ms 00:20:30.186 [2024-12-09 14:13:31.809402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.835990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.836060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:30.186 [2024-12-09 14:13:31.836078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.465 ms 00:20:30.186 [2024-12-09 14:13:31.836086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.849115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.849166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:30.186 [2024-12-09 14:13:31.849184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.920 ms 00:20:30.186 [2024-12-09 14:13:31.849192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.862489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.862555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:30.186 [2024-12-09 14:13:31.862571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.173 ms 00:20:30.186 [2024-12-09 14:13:31.862578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.186 [2024-12-09 14:13:31.863245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.186 [2024-12-09 14:13:31.863271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:30.186 [2024-12-09 14:13:31.863284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:20:30.186 [2024-12-09 14:13:31.863291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.187 [2024-12-09 
14:13:31.929711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.187 [2024-12-09 14:13:31.929774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:30.187 [2024-12-09 14:13:31.929795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.388 ms 00:20:30.187 [2024-12-09 14:13:31.929804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.187 [2024-12-09 14:13:31.941442] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:30.187 [2024-12-09 14:13:31.961135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.187 [2024-12-09 14:13:31.961215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:30.187 [2024-12-09 14:13:31.961228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.219 ms 00:20:30.187 [2024-12-09 14:13:31.961239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.187 [2024-12-09 14:13:31.961338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.187 [2024-12-09 14:13:31.961352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:30.187 [2024-12-09 14:13:31.961362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:30.187 [2024-12-09 14:13:31.961372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.187 [2024-12-09 14:13:31.961431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.187 [2024-12-09 14:13:31.961443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:30.187 [2024-12-09 14:13:31.961454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:30.187 [2024-12-09 14:13:31.961467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.187 [2024-12-09 14:13:31.961494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.187 [2024-12-09 14:13:31.961505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:30.187 [2024-12-09 14:13:31.961519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:30.187 [2024-12-09 14:13:31.961529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.187 [2024-12-09 14:13:31.961597] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:30.187 [2024-12-09 14:13:31.961616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.187 [2024-12-09 14:13:31.961624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:30.187 [2024-12-09 14:13:31.961634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:30.187 [2024-12-09 14:13:31.961645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.447 [2024-12-09 14:13:31.988403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.447 [2024-12-09 14:13:31.988642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:30.447 [2024-12-09 14:13:31.988675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.726 ms 00:20:30.447 [2024-12-09 14:13:31.988685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.447 [2024-12-09 14:13:31.988817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.447 [2024-12-09 14:13:31.988830] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:30.447 [2024-12-09 14:13:31.988845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:30.447 [2024-12-09 14:13:31.988853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.448 [2024-12-09 14:13:31.989989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:30.448 [2024-12-09 14:13:31.993709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.480 ms, result 0 00:20:30.448 [2024-12-09 14:13:31.995638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:30.448 Some configs were skipped because the RPC state that can call them passed over. 00:20:30.448 14:13:32 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:30.708 [2024-12-09 14:13:32.248764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.709 [2024-12-09 14:13:32.248986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:30.709 [2024-12-09 14:13:32.249057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:20:30.709 [2024-12-09 14:13:32.249085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.709 [2024-12-09 14:13:32.249150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.510 ms, result 0 00:20:30.709 true 00:20:30.709 14:13:32 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:30.709 [2024-12-09 14:13:32.464445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.709 [2024-12-09 14:13:32.464510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:30.709 [2024-12-09 14:13:32.464526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:20:30.709 [2024-12-09 14:13:32.464534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.709 [2024-12-09 14:13:32.464599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.704 ms, result 0 00:20:30.709 true 00:20:30.709 14:13:32 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76737 00:20:30.709 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76737 ']' 00:20:30.709 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76737 00:20:30.709 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:30.709 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.709 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76737 00:20:30.969 killing process with pid 76737 00:20:30.969 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.969 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.969 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76737' 00:20:30.969 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76737 00:20:30.969 14:13:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76737 00:20:31.540 [2024-12-09 14:13:33.271997] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.540 [2024-12-09 14:13:33.272073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.540 [2024-12-09 14:13:33.272089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.540 [2024-12-09 14:13:33.272102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.540 [2024-12-09 14:13:33.272126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:31.540 [2024-12-09 14:13:33.275234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.540 [2024-12-09 14:13:33.275442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.541 [2024-12-09 14:13:33.275477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.085 ms 00:20:31.541 [2024-12-09 14:13:33.275486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-12-09 14:13:33.275850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-12-09 14:13:33.275863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.541 [2024-12-09 14:13:33.275875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:31.541 [2024-12-09 14:13:33.275883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-12-09 14:13:33.280628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-12-09 14:13:33.280673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:31.541 [2024-12-09 14:13:33.280687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:20:31.541 [2024-12-09 14:13:33.280695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-12-09 14:13:33.287682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-12-09 14:13:33.287744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:31.541 [2024-12-09 14:13:33.287763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.933 ms 00:20:31.541 [2024-12-09 14:13:33.287771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-12-09 14:13:33.299427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-12-09 14:13:33.299487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:31.541 [2024-12-09 14:13:33.299503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.578 ms 00:20:31.541 [2024-12-09 14:13:33.299513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-12-09 14:13:33.308305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-12-09 14:13:33.308358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.541 [2024-12-09 14:13:33.308372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.710 ms 00:20:31.541 [2024-12-09 14:13:33.308381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-12-09 14:13:33.308575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-12-09 14:13:33.308588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.541 [2024-12-09 14:13:33.308601] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:31.541 [2024-12-09 14:13:33.308609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-12-09 14:13:33.320473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-12-09 14:13:33.320520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:31.541 [2024-12-09 14:13:33.320550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.836 ms 00:20:31.541 [2024-12-09 14:13:33.320557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-12-09 14:13:33.331892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-12-09 14:13:33.331942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:31.541 [2024-12-09 14:13:33.331961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.275 ms 00:20:31.541 [2024-12-09 14:13:33.331968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.801 [2024-12-09 14:13:33.342384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.801 [2024-12-09 14:13:33.342603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:31.801 [2024-12-09 14:13:33.342632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.355 ms 00:20:31.801 [2024-12-09 14:13:33.342640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.801 [2024-12-09 14:13:33.352944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.801 [2024-12-09 14:13:33.352991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.801 [2024-12-09 14:13:33.353004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.219 ms 00:20:31.801 [2024-12-09 14:13:33.353011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.801 [2024-12-09 14:13:33.353063] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.801 [2024-12-09 14:13:33.353078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:31.801 [2024-12-09 14:13:33.353094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.801 [2024-12-09 14:13:33.353102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 
14:13:33.353176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:31.802 [2024-12-09 14:13:33.353414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:31.802 [2024-12-09 14:13:33.353944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.353953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.353961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.353970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.353978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.353989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.353997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.354006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.354014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.354023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.354032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.354042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:31.803 [2024-12-09 14:13:33.354065] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:31.803 [2024-12-09 14:13:33.354080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 34e99555-49f3-4d3d-b544-1318be7f7bb8 00:20:31.803 [2024-12-09 14:13:33.354089] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:31.803 [2024-12-09 14:13:33.354099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:31.803 [2024-12-09 14:13:33.354107] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:31.803 [2024-12-09 14:13:33.354118] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:31.803 [2024-12-09 14:13:33.354125] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:31.803 [2024-12-09 14:13:33.354136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:31.803 [2024-12-09 14:13:33.354143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:31.803 [2024-12-09 14:13:33.354152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:31.803 [2024-12-09 14:13:33.354159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:31.803 [2024-12-09 14:13:33.354168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:31.803 [2024-12-09 14:13:33.354176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.803 [2024-12-09 14:13:33.354187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:20:31.803 [2024-12-09 14:13:33.354196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.367843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.803 [2024-12-09 14:13:33.367888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.803 [2024-12-09 14:13:33.367905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.597 ms 00:20:31.803 [2024-12-09 14:13:33.367913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.368352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.803 [2024-12-09 14:13:33.368374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.803 [2024-12-09 14:13:33.368386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:20:31.803 [2024-12-09 14:13:33.368393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.417883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.417935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.803 [2024-12-09 14:13:33.417950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.417959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.418059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.418073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.803 [2024-12-09 14:13:33.418084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.418092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.418146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.418156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.803 [2024-12-09 14:13:33.418168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.418176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.418198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.418207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.803 [2024-12-09 14:13:33.418220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.418228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.492927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.493136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.803 [2024-12-09 14:13:33.493160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.493168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 
14:13:33.544415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.544545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.803 [2024-12-09 14:13:33.544598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.544616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.544682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.544701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.803 [2024-12-09 14:13:33.544712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.544719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.544743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.544750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.803 [2024-12-09 14:13:33.544757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.544763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.544842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.544850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.803 [2024-12-09 14:13:33.544857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.544863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.544891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.544898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.803 [2024-12-09 14:13:33.544905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.544911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.544946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.544952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.803 [2024-12-09 14:13:33.544961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.544967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.545002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.803 [2024-12-09 14:13:33.545010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.803 [2024-12-09 14:13:33.545018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.803 [2024-12-09 14:13:33.545023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.803 [2024-12-09 14:13:33.545133] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 273.129 ms, result 0 00:20:32.371 14:13:34 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:32.371 14:13:34 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:32.371 [2024-12-09 14:13:34.123524] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:20:32.371 [2024-12-09 14:13:34.123664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76790 ] 00:20:32.630 [2024-12-09 14:13:34.276444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.630 [2024-12-09 14:13:34.357736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.888 [2024-12-09 14:13:34.567799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.888 [2024-12-09 14:13:34.567991] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.153 [2024-12-09 14:13:34.719673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.719708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:33.153 [2024-12-09 14:13:34.719719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:33.153 [2024-12-09 14:13:34.719725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.721802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.721829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.153 [2024-12-09 14:13:34.721837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:20:33.153 [2024-12-09 14:13:34.721842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.721898] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:33.153 [2024-12-09 14:13:34.722478] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:33.153 [2024-12-09 14:13:34.722498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.722504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.153 [2024-12-09 14:13:34.722511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:20:33.153 [2024-12-09 14:13:34.722517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.723583] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:33.153 [2024-12-09 14:13:34.733289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.733316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:33.153 [2024-12-09 14:13:34.733324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.708 ms 00:20:33.153 [2024-12-09 14:13:34.733330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.733403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.733411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:33.153 [2024-12-09 14:13:34.733418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.017 ms 00:20:33.153 [2024-12-09 14:13:34.733423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.737815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.737840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.153 [2024-12-09 14:13:34.737847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.362 ms 00:20:33.153 [2024-12-09 14:13:34.737853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.737930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.737938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.153 [2024-12-09 14:13:34.737944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:33.153 [2024-12-09 14:13:34.737949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.737967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.737974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:33.153 [2024-12-09 14:13:34.737979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:33.153 [2024-12-09 14:13:34.737984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.738002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:33.153 [2024-12-09 14:13:34.740783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.740805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.153 [2024-12-09 14:13:34.740813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.785 ms 00:20:33.153 [2024-12-09 14:13:34.740818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.740848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.153 [2024-12-09 14:13:34.740855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:33.153 [2024-12-09 14:13:34.740861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:33.153 [2024-12-09 14:13:34.740867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.153 [2024-12-09 14:13:34.740882] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:33.153 [2024-12-09 14:13:34.740897] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:33.153 [2024-12-09 14:13:34.740922] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:33.153 [2024-12-09 14:13:34.740933] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:33.153 [2024-12-09 14:13:34.741012] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:33.153 [2024-12-09 14:13:34.741020] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:33.153 [2024-12-09 14:13:34.741029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:33.153 [2024-12-09 14:13:34.741038] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:33.153 [2024-12-09 14:13:34.741045] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:33.153 [2024-12-09 14:13:34.741051] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:33.153 [2024-12-09 14:13:34.741056] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:33.154 [2024-12-09 14:13:34.741062] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:33.154 [2024-12-09 14:13:34.741068] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:33.154 [2024-12-09 14:13:34.741074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.154 [2024-12-09 14:13:34.741080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:33.154 [2024-12-09 14:13:34.741085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:33.154 [2024-12-09 14:13:34.741090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.154 [2024-12-09 14:13:34.741157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.154 [2024-12-09 14:13:34.741165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:33.154 [2024-12-09 14:13:34.741170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:33.154 [2024-12-09 14:13:34.741176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.154 [2024-12-09 14:13:34.741257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:33.154 [2024-12-09 14:13:34.741264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:33.154 [2024-12-09 14:13:34.741270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:33.154 [2024-12-09 14:13:34.741287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:33.154 [2024-12-09 14:13:34.741303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.154 [2024-12-09 14:13:34.741313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:33.154 [2024-12-09 14:13:34.741323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:33.154 [2024-12-09 14:13:34.741328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.154 [2024-12-09 14:13:34.741332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:33.154 [2024-12-09 14:13:34.741337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:33.154 [2024-12-09 14:13:34.741344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741349] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:33.154 [2024-12-09 14:13:34.741354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:33.154 [2024-12-09 14:13:34.741369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:33.154 [2024-12-09 14:13:34.741383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:33.154 [2024-12-09 14:13:34.741398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:33.154 [2024-12-09 14:13:34.741414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:33.154 [2024-12-09 14:13:34.741428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.154 [2024-12-09 14:13:34.741438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:33.154 [2024-12-09 14:13:34.741443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:33.154 [2024-12-09 14:13:34.741448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.154 [2024-12-09 14:13:34.741453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:33.154 [2024-12-09 14:13:34.741458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:33.154 [2024-12-09 14:13:34.741463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:33.154 [2024-12-09 14:13:34.741473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:33.154 [2024-12-09 14:13:34.741478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741483] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:33.154 [2024-12-09 14:13:34.741489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:33.154 [2024-12-09 14:13:34.741496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.154 [2024-12-09 14:13:34.741507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:33.154 
[2024-12-09 14:13:34.741513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:33.154 [2024-12-09 14:13:34.741518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:33.154 [2024-12-09 14:13:34.741523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:33.154 [2024-12-09 14:13:34.741528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:33.154 [2024-12-09 14:13:34.741533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:33.154 [2024-12-09 14:13:34.741554] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:33.154 [2024-12-09 14:13:34.741561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.154 [2024-12-09 14:13:34.741567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:33.154 [2024-12-09 14:13:34.741573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:33.154 [2024-12-09 14:13:34.741578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:33.154 [2024-12-09 14:13:34.741584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:33.154 [2024-12-09 14:13:34.741590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:33.154 [2024-12-09 14:13:34.741602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:33.154 [2024-12-09 14:13:34.741608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:33.154 [2024-12-09 14:13:34.741613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:33.154 [2024-12-09 14:13:34.741619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:33.154 [2024-12-09 14:13:34.741625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:33.154 [2024-12-09 14:13:34.741635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:33.154 [2024-12-09 14:13:34.741641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:33.154 [2024-12-09 14:13:34.741646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:33.154 [2024-12-09 14:13:34.741651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:33.154 [2024-12-09 14:13:34.741657] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:33.154 [2024-12-09 14:13:34.741663] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.154 [2024-12-09 14:13:34.741668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:33.154 [2024-12-09 14:13:34.741674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:33.154 [2024-12-09 14:13:34.741679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:33.154 [2024-12-09 14:13:34.741684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:33.154 [2024-12-09 14:13:34.741690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.154 [2024-12-09 14:13:34.741697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:33.154 [2024-12-09 14:13:34.741703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:20:33.154 [2024-12-09 14:13:34.741709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.154 [2024-12-09 14:13:34.762485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.154 [2024-12-09 14:13:34.762514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.154 [2024-12-09 14:13:34.762522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.736 ms 00:20:33.154 [2024-12-09 14:13:34.762527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.154 [2024-12-09 14:13:34.762636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.154 [2024-12-09 14:13:34.762644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:33.154 [2024-12-09 14:13:34.762651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:33.154 [2024-12-09 14:13:34.762657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.154 [2024-12-09 14:13:34.800959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.154 [2024-12-09 14:13:34.801083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.154 [2024-12-09 14:13:34.801101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.285 ms 00:20:33.154 [2024-12-09 14:13:34.801107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.801168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.801178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.155 [2024-12-09 14:13:34.801184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:33.155 [2024-12-09 14:13:34.801190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.801488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.801500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.155 [2024-12-09 14:13:34.801507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:33.155 [2024-12-09 14:13:34.801517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 
14:13:34.801631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.801640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.155 [2024-12-09 14:13:34.801646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:33.155 [2024-12-09 14:13:34.801652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.812353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.812462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.155 [2024-12-09 14:13:34.812474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.686 ms 00:20:33.155 [2024-12-09 14:13:34.812480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.822442] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:33.155 [2024-12-09 14:13:34.822550] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:33.155 [2024-12-09 14:13:34.822604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.822621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:33.155 [2024-12-09 14:13:34.822638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.016 ms 00:20:33.155 [2024-12-09 14:13:34.822652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.841355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.841445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:33.155 [2024-12-09 14:13:34.841488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.572 ms 00:20:33.155 [2024-12-09 14:13:34.841506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.850475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.850578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:33.155 [2024-12-09 14:13:34.850621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.901 ms 00:20:33.155 [2024-12-09 14:13:34.850638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.859422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.859507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:33.155 [2024-12-09 14:13:34.859555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.737 ms 00:20:33.155 [2024-12-09 14:13:34.859571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.860036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.860115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:33.155 [2024-12-09 14:13:34.860154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:33.155 [2024-12-09 14:13:34.860172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.903952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.904093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:33.155 [2024-12-09 14:13:34.904135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.750 ms 00:20:33.155 [2024-12-09 14:13:34.904153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.911871] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:33.155 [2024-12-09 14:13:34.923544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.923654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:33.155 [2024-12-09 14:13:34.923691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.314 ms 00:20:33.155 [2024-12-09 14:13:34.923713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.923797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.923818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:33.155 [2024-12-09 14:13:34.923834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:33.155 [2024-12-09 14:13:34.923849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.923896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.923913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:33.155 [2024-12-09 14:13:34.923929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:33.155 [2024-12-09 14:13:34.923996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.924035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.924053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:33.155 [2024-12-09 14:13:34.924068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:33.155 [2024-12-09 14:13:34.924082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-12-09 14:13:34.924114] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:33.155 [2024-12-09 14:13:34.924131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-12-09 14:13:34.924257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:33.155 [2024-12-09 14:13:34.924272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:33.155 [2024-12-09 14:13:34.924285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.441 [2024-12-09 14:13:34.942203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.441 [2024-12-09 14:13:34.942295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:33.441 [2024-12-09 14:13:34.942334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.890 ms 00:20:33.441 [2024-12-09 14:13:34.942351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.441 [2024-12-09 14:13:34.942423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.441 [2024-12-09 14:13:34.942444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:33.441 [2024-12-09 14:13:34.942459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:33.441 [2024-12-09 14:13:34.942473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.441 [2024-12-09 14:13:34.943357] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:33.441 [2024-12-09 14:13:34.945898] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.461 ms, result 0 00:20:33.441 [2024-12-09 14:13:34.946777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.441 [2024-12-09 14:13:34.957699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:34.390 [2024-12-09T14:13:48.284Z] Copying: 256/256 [MB] (average 19 MBps) [2024-12-09 14:13:48.134377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.490 [2024-12-09 14:13:48.143980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.144019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:46.490 [2024-12-09 14:13:48.144038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.490 [2024-12-09 14:13:48.144046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.144066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:46.490 [2024-12-09 14:13:48.146913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.146946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:46.490 [2024-12-09 14:13:48.146956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:20:46.490 [2024-12-09 14:13:48.146964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.147221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.147230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:46.490 [2024-12-09 14:13:48.147239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:20:46.490 [2024-12-09 14:13:48.147246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.151071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.151160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:20:46.490 [2024-12-09 14:13:48.151213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:20:46.490 [2024-12-09 14:13:48.151236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.158328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.158450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:46.490 [2024-12-09 14:13:48.158502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.059 ms 00:20:46.490 [2024-12-09 14:13:48.158524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.183578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.183718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:46.490 [2024-12-09 14:13:48.183775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.966 ms 00:20:46.490 [2024-12-09 14:13:48.183797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.198853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.198999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:46.490 [2024-12-09 14:13:48.199076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.821 ms 00:20:46.490 [2024-12-09 14:13:48.199100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.199253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.199428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:46.490 [2024-12-09 14:13:48.199465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:46.490 [2024-12-09 14:13:48.199483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.225378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.225548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:46.490 [2024-12-09 14:13:48.225621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.861 ms 00:20:46.490 [2024-12-09 14:13:48.225645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.252591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.252765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:46.490 [2024-12-09 14:13:48.252823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.850 ms 00:20:46.490 [2024-12-09 14:13:48.252844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.490 [2024-12-09 14:13:48.279319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.490 [2024-12-09 14:13:48.279494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:46.490 [2024-12-09 14:13:48.279582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.358 ms 00:20:46.490 [2024-12-09 14:13:48.279617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.753 [2024-12-09 14:13:48.305812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.753 [2024-12-09 14:13:48.305984] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:46.753 [2024-12-09 14:13:48.306041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.074 ms 00:20:46.753 [2024-12-09 14:13:48.306062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.753 [2024-12-09 14:13:48.306257] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:46.753 [2024-12-09 14:13:48.306291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:46.753 [2024-12-09 14:13:48.306462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:46.753 [2024-12-09 14:13:48.306505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.306999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307065] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:46.754 [2024-12-09 14:13:48.307105] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:46.754 [2024-12-09 14:13:48.307113] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 34e99555-49f3-4d3d-b544-1318be7f7bb8 00:20:46.754 [2024-12-09 14:13:48.307122] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:46.754 [2024-12-09 14:13:48.307129] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:46.754 [2024-12-09 14:13:48.307136] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:46.754 [2024-12-09 14:13:48.307144] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:46.754 [2024-12-09 14:13:48.307152] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:46.754 [2024-12-09 14:13:48.307159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:46.754 [2024-12-09 14:13:48.307170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:46.754 [2024-12-09 14:13:48.307176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:46.754 [2024-12-09 14:13:48.307183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:46.754 [2024-12-09 14:13:48.307191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.754 [2024-12-09 14:13:48.307199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:46.754 [2024-12-09 14:13:48.307208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:20:46.755 [2024-12-09 14:13:48.307216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.321297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.755 [2024-12-09 14:13:48.321470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:46.755 [2024-12-09 14:13:48.321487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.057 ms 00:20:46.755 [2024-12-09 14:13:48.321495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.321960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.755 [2024-12-09 14:13:48.321980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:46.755 [2024-12-09 14:13:48.321991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:20:46.755 [2024-12-09 14:13:48.321998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.361399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.361451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.755 [2024-12-09 14:13:48.361463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.361478] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.361609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.361620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.755 [2024-12-09 14:13:48.361630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.361638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.361698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.361708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.755 [2024-12-09 14:13:48.361717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.361725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.361746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.361755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.755 [2024-12-09 14:13:48.361763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.361771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.445781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.445841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.755 [2024-12-09 14:13:48.445854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.445863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.515787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.516044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.755 [2024-12-09 14:13:48.516068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.516077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.516160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.516170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.755 [2024-12-09 14:13:48.516179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.516188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.516221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.516239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.755 [2024-12-09 14:13:48.516248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.516257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.516369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.516380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.755 [2024-12-09 14:13:48.516389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:46.755 [2024-12-09 14:13:48.516397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.516433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.516444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:46.755 [2024-12-09 14:13:48.516456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.516464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.516507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.516518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.755 [2024-12-09 14:13:48.516526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.516572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.516645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.755 [2024-12-09 14:13:48.516667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.755 [2024-12-09 14:13:48.516680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.755 [2024-12-09 14:13:48.516692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.755 [2024-12-09 14:13:48.516911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.899 ms, result 0 00:20:47.700 00:20:47.700 00:20:47.700 14:13:49 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:47.700 14:13:49 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:48.273 14:13:49 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:48.273 [2024-12-09 14:13:49.960049] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:20:48.273 [2024-12-09 14:13:49.960184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76955 ] 00:20:48.535 [2024-12-09 14:13:50.119409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.535 [2024-12-09 14:13:50.253830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.797 [2024-12-09 14:13:50.555627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:48.797 [2024-12-09 14:13:50.555720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.061 [2024-12-09 14:13:50.718759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.718832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:49.061 [2024-12-09 14:13:50.718848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:49.061 [2024-12-09 14:13:50.718857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.721927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.722142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.061 [2024-12-09 14:13:50.722164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms 00:20:49.061 [2024-12-09 14:13:50.722173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.722395] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:49.061 [2024-12-09 14:13:50.723170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:49.061 [2024-12-09 14:13:50.723211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.723220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.061 [2024-12-09 14:13:50.723230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:20:49.061 [2024-12-09 14:13:50.723238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.725337] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:49.061 [2024-12-09 14:13:50.740026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.740079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:49.061 [2024-12-09 14:13:50.740093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.692 ms 00:20:49.061 [2024-12-09 14:13:50.740101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.740234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.740248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:49.061 [2024-12-09 14:13:50.740258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:49.061 [2024-12-09 14:13:50.740267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.748791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:49.061 [2024-12-09 14:13:50.748837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.061 [2024-12-09 14:13:50.748848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.476 ms 00:20:49.061 [2024-12-09 14:13:50.748857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.748971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.748982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.061 [2024-12-09 14:13:50.748992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:49.061 [2024-12-09 14:13:50.749000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.749032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.749041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:49.061 [2024-12-09 14:13:50.749050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:49.061 [2024-12-09 14:13:50.749057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.749080] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:49.061 [2024-12-09 14:13:50.753105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.753149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.061 [2024-12-09 14:13:50.753159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.032 ms 00:20:49.061 [2024-12-09 14:13:50.753168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.753266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.753277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:49.061 [2024-12-09 14:13:50.753286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:49.061 [2024-12-09 14:13:50.753294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.753319] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:49.061 [2024-12-09 14:13:50.753342] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:49.061 [2024-12-09 14:13:50.753380] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:49.061 [2024-12-09 14:13:50.753396] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:49.061 [2024-12-09 14:13:50.753502] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:49.061 [2024-12-09 14:13:50.753514] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:49.061 [2024-12-09 14:13:50.753525] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:49.061 [2024-12-09 14:13:50.753569] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:49.061 [2024-12-09 14:13:50.753579] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:49.061 [2024-12-09 14:13:50.753587] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:49.061 [2024-12-09 14:13:50.753596] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:49.061 [2024-12-09 14:13:50.753605] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:49.061 [2024-12-09 14:13:50.753612] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:49.061 [2024-12-09 14:13:50.753620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.753629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:49.061 [2024-12-09 14:13:50.753637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:49.061 [2024-12-09 14:13:50.753645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.753735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.061 [2024-12-09 14:13:50.753748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:49.061 [2024-12-09 14:13:50.753756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:49.061 [2024-12-09 14:13:50.753764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.061 [2024-12-09 14:13:50.753866] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:49.061 [2024-12-09 14:13:50.753877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:49.061 [2024-12-09 14:13:50.753886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.061 [2024-12-09 14:13:50.753894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.061 [2024-12-09 14:13:50.753902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:49.061 [2024-12-09 14:13:50.753909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:49.062 [2024-12-09 14:13:50.753916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:49.062 [2024-12-09 14:13:50.753923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:49.062 [2024-12-09 14:13:50.753931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:49.062 [2024-12-09 14:13:50.753938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.062 [2024-12-09 14:13:50.753945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:49.062 [2024-12-09 14:13:50.753960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:49.062 [2024-12-09 14:13:50.753966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.062 [2024-12-09 14:13:50.753974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:49.062 [2024-12-09 14:13:50.753981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:49.062 [2024-12-09 14:13:50.753989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.062 [2024-12-09 14:13:50.753996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:49.062 [2024-12-09 14:13:50.754003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:49.062 [2024-12-09 14:13:50.754010] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:49.062 [2024-12-09 14:13:50.754024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.062 [2024-12-09 14:13:50.754036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:49.062 [2024-12-09 14:13:50.754043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.062 [2024-12-09 14:13:50.754056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:49.062 [2024-12-09 14:13:50.754064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.062 [2024-12-09 14:13:50.754078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:49.062 [2024-12-09 14:13:50.754085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.062 [2024-12-09 14:13:50.754100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:49.062 [2024-12-09 14:13:50.754107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.062 [2024-12-09 14:13:50.754120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:49.062 [2024-12-09 14:13:50.754126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:49.062 [2024-12-09 14:13:50.754133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.062 [2024-12-09 14:13:50.754140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:49.062 [2024-12-09 14:13:50.754146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:49.062 [2024-12-09 14:13:50.754154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:49.062 [2024-12-09 14:13:50.754167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:49.062 [2024-12-09 14:13:50.754173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754180] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:49.062 [2024-12-09 14:13:50.754188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:49.062 [2024-12-09 14:13:50.754197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.062 [2024-12-09 14:13:50.754204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.062 [2024-12-09 14:13:50.754213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:49.062 [2024-12-09 14:13:50.754220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:49.062 [2024-12-09 14:13:50.754227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:49.062 
[2024-12-09 14:13:50.754233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:49.062 [2024-12-09 14:13:50.754239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:49.062 [2024-12-09 14:13:50.754246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:49.062 [2024-12-09 14:13:50.754255] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:49.062 [2024-12-09 14:13:50.754265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.062 [2024-12-09 14:13:50.754273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:49.062 [2024-12-09 14:13:50.754280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:49.062 [2024-12-09 14:13:50.754287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:49.062 [2024-12-09 14:13:50.754302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:49.062 [2024-12-09 14:13:50.754309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:49.062 [2024-12-09 14:13:50.754316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:49.062 [2024-12-09 14:13:50.754323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:49.062 [2024-12-09 14:13:50.754330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:49.062 [2024-12-09 14:13:50.754337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:49.062 [2024-12-09 14:13:50.754344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:49.062 [2024-12-09 14:13:50.754351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:49.062 [2024-12-09 14:13:50.754358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:49.062 [2024-12-09 14:13:50.754365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:49.062 [2024-12-09 14:13:50.754373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:49.062 [2024-12-09 14:13:50.754380] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:49.062 [2024-12-09 14:13:50.754390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.062 [2024-12-09 14:13:50.754398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:49.062 [2024-12-09 14:13:50.754405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:49.062 [2024-12-09 14:13:50.754413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:49.062 [2024-12-09 14:13:50.754420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:49.062 [2024-12-09 14:13:50.754428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.062 [2024-12-09 14:13:50.754439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:49.062 [2024-12-09 14:13:50.754446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:20:49.062 [2024-12-09 14:13:50.754455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.062 [2024-12-09 14:13:50.786580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.062 [2024-12-09 14:13:50.786620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.062 [2024-12-09 14:13:50.786631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.066 ms 00:20:49.062 [2024-12-09 14:13:50.786639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.062 [2024-12-09 14:13:50.786779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.062 [2024-12-09 14:13:50.786790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:49.062 [2024-12-09 14:13:50.786799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:49.062 [2024-12-09 14:13:50.786807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.062 [2024-12-09 14:13:50.834519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.062 [2024-12-09 14:13:50.834590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.062 [2024-12-09 14:13:50.834613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.689 ms 00:20:49.062 [2024-12-09 14:13:50.834622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.062 [2024-12-09 14:13:50.834743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.062 [2024-12-09 14:13:50.834756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.062 [2024-12-09 14:13:50.834766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:49.062 [2024-12-09 14:13:50.834774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.063 [2024-12-09 14:13:50.835342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.063 [2024-12-09 14:13:50.835388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.063 [2024-12-09 14:13:50.835407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:20:49.063 [2024-12-09 14:13:50.835416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.063 [2024-12-09 14:13:50.835595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.063 [2024-12-09 14:13:50.835607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.063 [2024-12-09 14:13:50.835615] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:20:49.063 [2024-12-09 14:13:50.835623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.063 [2024-12-09 14:13:50.852310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.063 [2024-12-09 14:13:50.852361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.063 [2024-12-09 14:13:50.852372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.663 ms 00:20:49.063 [2024-12-09 14:13:50.852382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:50.867007] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:49.325 [2024-12-09 14:13:50.867057] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:49.325 [2024-12-09 14:13:50.867070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:50.867078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:49.325 [2024-12-09 14:13:50.867089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.573 ms 00:20:49.325 [2024-12-09 14:13:50.867097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:50.893348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:50.893402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:49.325 [2024-12-09 14:13:50.893415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.150 ms 00:20:49.325 [2024-12-09 14:13:50.893425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:50.906698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:50.906748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:49.325 [2024-12-09 14:13:50.906760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.172 ms 00:20:49.325 [2024-12-09 14:13:50.906767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:50.919903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:50.919950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:49.325 [2024-12-09 14:13:50.919962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.039 ms 00:20:49.325 [2024-12-09 14:13:50.919970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:50.920662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:50.920697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:49.325 [2024-12-09 14:13:50.920708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:20:49.325 [2024-12-09 14:13:50.920715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:50.987436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:50.987508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:49.325 [2024-12-09 14:13:50.987526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.691 ms 00:20:49.325 [2024-12-09 14:13:50.987555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:50.999151] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:49.325 [2024-12-09 14:13:51.019975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:51.020033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:49.325 [2024-12-09 14:13:51.020048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.292 ms 00:20:49.325 [2024-12-09 14:13:51.020064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:51.020184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:51.020197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:49.325 [2024-12-09 14:13:51.020207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:49.325 [2024-12-09 14:13:51.020217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:51.020279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:51.020289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:49.325 [2024-12-09 14:13:51.020299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:49.325 [2024-12-09 14:13:51.020311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:51.020345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:51.020354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:49.325 [2024-12-09 14:13:51.020362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:49.325 [2024-12-09 14:13:51.020371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:51.020412] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:49.325 [2024-12-09 14:13:51.020423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:51.020431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:49.325 [2024-12-09 14:13:51.020439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:49.325 [2024-12-09 14:13:51.020448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:51.047821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:51.047882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:49.325 [2024-12-09 14:13:51.047898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.349 ms 00:20:49.325 [2024-12-09 14:13:51.047907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.325 [2024-12-09 14:13:51.048051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.325 [2024-12-09 14:13:51.048065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:49.325 [2024-12-09 14:13:51.048076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:49.325 [2024-12-09 14:13:51.048084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:49.325 [2024-12-09 14:13:51.049734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:49.325 [2024-12-09 14:13:51.053285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.598 ms, result 0 00:20:49.325 [2024-12-09 14:13:51.054486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:49.325 [2024-12-09 14:13:51.068449] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:49.901  [2024-12-09T14:13:51.695Z] Copying: 4096/4096 [kB] (average 9683 kBps)[2024-12-09 14:13:51.494772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:49.901 [2024-12-09 14:13:51.504673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.504728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:49.901 [2024-12-09 14:13:51.504752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:49.901 [2024-12-09 14:13:51.504761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.504785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:49.901 [2024-12-09 14:13:51.507646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.507691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:49.901 [2024-12-09 14:13:51.507704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:20:49.901 [2024-12-09 14:13:51.507712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.510734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.510784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:49.901 [2024-12-09 14:13:51.510795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.990 ms 00:20:49.901 [2024-12-09 14:13:51.510804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.515087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.515135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:49.901 [2024-12-09 14:13:51.515146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:20:49.901 [2024-12-09 14:13:51.515154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.522251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.522293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:49.901 [2024-12-09 14:13:51.522305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.061 ms 00:20:49.901 [2024-12-09 14:13:51.522312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.549034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.549087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:49.901 [2024-12-09 14:13:51.549100] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.667 ms 00:20:49.901 [2024-12-09 14:13:51.549109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.565141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.565220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:49.901 [2024-12-09 14:13:51.565235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.961 ms 00:20:49.901 [2024-12-09 14:13:51.565243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.565406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.565418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:49.901 [2024-12-09 14:13:51.565437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:49.901 [2024-12-09 14:13:51.565446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.592221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.592287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:49.901 [2024-12-09 14:13:51.592298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.757 ms 00:20:49.901 [2024-12-09 14:13:51.592305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.901 [2024-12-09 14:13:51.618395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.901 [2024-12-09 14:13:51.618444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:49.902 [2024-12-09 14:13:51.618455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.022 ms 00:20:49.902 [2024-12-09 14:13:51.618462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.902 [2024-12-09 14:13:51.643797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.902 [2024-12-09 14:13:51.643848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:49.902 [2024-12-09 14:13:51.643859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.269 ms 00:20:49.902 [2024-12-09 14:13:51.643866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.902 [2024-12-09 14:13:51.669490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.902 [2024-12-09 14:13:51.669556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:49.902 [2024-12-09 14:13:51.669569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.512 ms 00:20:49.902 [2024-12-09 14:13:51.669576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.902 [2024-12-09 14:13:51.669639] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:49.902 [2024-12-09 14:13:51.669656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:49.902 [2024-12-09 14:13:51.669667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:49.902 [2024-12-09 14:13:51.669677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:49.902 [2024-12-09 14:13:51.669685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:49.902 [2024-12-09 14:13:51.669695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5 ... Band 100: 0 / 261120 wr_cnt: 0 state: free (all 96 remaining bands identical)
00:20:49.903 [2024-12-09 14:13:51.670448] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:49.903 [2024-12-09 14:13:51.670456] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 34e99555-49f3-4d3d-b544-1318be7f7bb8
00:20:49.903 [2024-12-09 14:13:51.670466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:49.903 [2024-12-09 14:13:51.670473] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:20:49.903 [2024-12-09 14:13:51.670480] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:49.903 [2024-12-09 14:13:51.670488] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:49.903 [2024-12-09 14:13:51.670496] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:49.903 [2024-12-09 14:13:51.670504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:49.903 [2024-12-09 14:13:51.670515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:49.903 [2024-12-09 14:13:51.670521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:49.903 [2024-12-09 14:13:51.670528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:49.903 [2024-12-09 14:13:51.670559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.903 [2024-12-09 14:13:51.670567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:49.903 [2024-12-09 14:13:51.670577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:20:49.903 [2024-12-09 14:13:51.670585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.903 [2024-12-09 14:13:51.684061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.903 [2024-12-09 14:13:51.684106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:49.903 [2024-12-09 14:13:51.684116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.456 ms 00:20:49.903 [2024-12-09 14:13:51.684125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.903 [2024-12-09 14:13:51.684552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.903 [2024-12-09 14:13:51.684573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:49.903 [2024-12-09 14:13:51.684583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:20:49.903 [2024-12-09 14:13:51.684590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.724001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.724057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.166 [2024-12-09 14:13:51.724069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.724084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.724167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.724176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.166 [2024-12-09 14:13:51.724185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.724193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.724246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.724256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.166 [2024-12-09 14:13:51.724266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.724273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.724295] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.724303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.166 [2024-12-09 14:13:51.724311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.724318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.809435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.809493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.166 [2024-12-09 14:13:51.809507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.809523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.879147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.879208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.166 [2024-12-09 14:13:51.879221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.879231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.879289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.879299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.166 [2024-12-09 14:13:51.879308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.879318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.879351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.879368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.166 [2024-12-09 14:13:51.879377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.879386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.879489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.879501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.166 [2024-12-09 14:13:51.879510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.879518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.879576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.879587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:50.166 [2024-12-09 14:13:51.879599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.879607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.879654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.879664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.166 [2024-12-09 14:13:51.879673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.879681] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.879729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:50.166 [2024-12-09 14:13:51.879743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.166 [2024-12-09 14:13:51.879752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:50.166 [2024-12-09 14:13:51.879761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-12-09 14:13:51.879923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.236 ms, result 0 00:20:51.111 00:20:51.111 00:20:51.111 14:13:52 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:51.111 14:13:52 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76991 00:20:51.111 14:13:52 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76991 00:20:51.111 14:13:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76991 ']' 00:20:51.111 14:13:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.111 14:13:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.112 14:13:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.112 14:13:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.112 14:13:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:51.112 [2024-12-09 14:13:52.769336] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:20:51.112 [2024-12-09 14:13:52.769471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76991 ] 00:20:51.374 [2024-12-09 14:13:52.935940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.374 [2024-12-09 14:13:53.062762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.379 14:13:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.379 14:13:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:52.379 14:13:53 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:52.379 [2024-12-09 14:13:53.983318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:52.379 [2024-12-09 14:13:53.983398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:52.642 [2024-12-09 14:13:54.163207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-12-09 14:13:54.163268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:52.642 [2024-12-09 14:13:54.163286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:52.642 [2024-12-09 14:13:54.163295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-12-09 14:13:54.166353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-12-09 14:13:54.166403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.642 [2024-12-09 14:13:54.166417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:20:52.642 [2024-12-09 14:13:54.166425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-12-09 14:13:54.166579] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:52.642 [2024-12-09 14:13:54.167347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:52.642 [2024-12-09 14:13:54.167375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-12-09 14:13:54.167384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.642 [2024-12-09 14:13:54.167396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:20:52.642 [2024-12-09 14:13:54.167404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-12-09 14:13:54.169803] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:52.642 [2024-12-09 14:13:54.184348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-12-09 14:13:54.184407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:52.642 [2024-12-09 14:13:54.184423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.551 ms 00:20:52.642 [2024-12-09 14:13:54.184434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-12-09 14:13:54.184578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-12-09 14:13:54.184594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:52.642 [2024-12-09 14:13:54.184605] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:52.642 [2024-12-09 14:13:54.184616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-12-09 14:13:54.193215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-12-09 14:13:54.193264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.642 [2024-12-09 14:13:54.193274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.542 ms 00:20:52.642 [2024-12-09 14:13:54.193285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-12-09 14:13:54.193409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.642 [2024-12-09 14:13:54.193422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.642 [2024-12-09 14:13:54.193431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:52.642 [2024-12-09 14:13:54.193445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.642 [2024-12-09 14:13:54.193475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.643 [2024-12-09 14:13:54.193486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:52.643 [2024-12-09 14:13:54.193494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:52.643 [2024-12-09 14:13:54.193503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.643 [2024-12-09 14:13:54.193529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:52.643 [2024-12-09 14:13:54.197520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.643 [2024-12-09 14:13:54.197568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.643 [2024-12-09 14:13:54.197581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.996 ms 00:20:52.643 [2024-12-09 14:13:54.197589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.643 [2024-12-09 14:13:54.197679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.643 [2024-12-09 14:13:54.197689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:52.643 [2024-12-09 14:13:54.197701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:52.643 [2024-12-09 14:13:54.197712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.643 [2024-12-09 14:13:54.197735] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:52.643 [2024-12-09 14:13:54.197758] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:52.643 [2024-12-09 14:13:54.197807] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:52.643 [2024-12-09 14:13:54.197824] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:52.643 [2024-12-09 14:13:54.197933] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:52.643 [2024-12-09 14:13:54.197944] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:52.643 [2024-12-09 14:13:54.197963] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:52.643 [2024-12-09 14:13:54.197973] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:52.643 [2024-12-09 14:13:54.197984] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:52.643 [2024-12-09 14:13:54.197992] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:52.643 [2024-12-09 14:13:54.198002] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:52.643 [2024-12-09 14:13:54.198010] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:52.643 [2024-12-09 14:13:54.198021] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:52.643 [2024-12-09 14:13:54.198029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.643 [2024-12-09 14:13:54.198040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:52.643 [2024-12-09 14:13:54.198048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:52.643 [2024-12-09 14:13:54.198057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.643 [2024-12-09 14:13:54.198146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.643 [2024-12-09 14:13:54.198166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:52.643 [2024-12-09 14:13:54.198174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:52.643 [2024-12-09 14:13:54.198183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.643 [2024-12-09 14:13:54.198283] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:52.643 [2024-12-09 14:13:54.198296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:52.643 [2024-12-09 14:13:54.198304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:52.643 [2024-12-09 14:13:54.198334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:52.643 [2024-12-09 14:13:54.198362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.643 [2024-12-09 14:13:54.198379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:52.643 [2024-12-09 14:13:54.198388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:52.643 [2024-12-09 14:13:54.198394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.643 [2024-12-09 14:13:54.198403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:52.643 [2024-12-09 14:13:54.198411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:52.643 [2024-12-09 14:13:54.198420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.643 
[2024-12-09 14:13:54.198426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:52.643 [2024-12-09 14:13:54.198435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:52.643 [2024-12-09 14:13:54.198463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:52.643 [2024-12-09 14:13:54.198489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:52.643 [2024-12-09 14:13:54.198511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:52.643 [2024-12-09 14:13:54.198552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:52.643 [2024-12-09 14:13:54.198575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.643 [2024-12-09 14:13:54.198591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:52.643 [2024-12-09 14:13:54.198600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:52.643 [2024-12-09 14:13:54.198606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.643 [2024-12-09 14:13:54.198615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:52.643 [2024-12-09 14:13:54.198621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:52.643 [2024-12-09 14:13:54.198632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:52.643 [2024-12-09 14:13:54.198650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:52.643 [2024-12-09 14:13:54.198657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198666] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:52.643 [2024-12-09 14:13:54.198676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:52.643 [2024-12-09 14:13:54.198686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.643 [2024-12-09 14:13:54.198703] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:52.643 [2024-12-09 14:13:54.198711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:52.643 [2024-12-09 14:13:54.198720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:52.643 [2024-12-09 14:13:54.198727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:52.643 [2024-12-09 14:13:54.198736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:52.643 [2024-12-09 14:13:54.198742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:52.643 [2024-12-09 14:13:54.198753] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:52.643 [2024-12-09 14:13:54.198763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.643 [2024-12-09 14:13:54.198777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:52.643 [2024-12-09 14:13:54.198784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:52.643 [2024-12-09 14:13:54.198794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:52.643 [2024-12-09 14:13:54.198801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:52.643 [2024-12-09 14:13:54.198810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:52.643 [2024-12-09 14:13:54.198817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:52.644 [2024-12-09 14:13:54.198826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:52.644 [2024-12-09 14:13:54.198834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:52.644 [2024-12-09 14:13:54.198843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:52.644 [2024-12-09 14:13:54.198850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:52.644 [2024-12-09 14:13:54.198859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:52.644 [2024-12-09 14:13:54.198866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:52.644 [2024-12-09 14:13:54.198875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:52.644 [2024-12-09 14:13:54.198883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:52.644 [2024-12-09 14:13:54.198892] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:52.644 [2024-12-09 
14:13:54.198901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.644 [2024-12-09 14:13:54.198913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:52.644 [2024-12-09 14:13:54.198922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:52.644 [2024-12-09 14:13:54.198932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:52.644 [2024-12-09 14:13:54.198939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:52.644 [2024-12-09 14:13:54.198949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.198956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:52.644 [2024-12-09 14:13:54.198966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:20:52.644 [2024-12-09 14:13:54.198976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.231517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.231586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.644 [2024-12-09 14:13:54.231601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.475 ms 00:20:52.644 [2024-12-09 14:13:54.231612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.231747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.231758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:52.644 [2024-12-09 14:13:54.231769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:52.644 [2024-12-09 14:13:54.231776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.267313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.267359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.644 [2024-12-09 14:13:54.267373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.509 ms 00:20:52.644 [2024-12-09 14:13:54.267381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.267475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.267485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.644 [2024-12-09 14:13:54.267496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:52.644 [2024-12-09 14:13:54.267504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.268087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.268123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.644 [2024-12-09 14:13:54.268137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:20:52.644 [2024-12-09 14:13:54.268145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.268300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.268309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.644 [2024-12-09 14:13:54.268319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:52.644 [2024-12-09 14:13:54.268328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.286579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.286619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.644 [2024-12-09 14:13:54.286633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.225 ms 00:20:52.644 [2024-12-09 14:13:54.286641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.313559] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:52.644 [2024-12-09 14:13:54.313610] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:52.644 [2024-12-09 14:13:54.313630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.313640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:52.644 [2024-12-09 14:13:54.313652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.869 ms 00:20:52.644 [2024-12-09 14:13:54.313667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.339883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.339930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:52.644 [2024-12-09 14:13:54.339946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.102 ms 00:20:52.644 [2024-12-09 14:13:54.339955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.353398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.353443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:52.644 [2024-12-09 14:13:54.353461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.333 ms 00:20:52.644 [2024-12-09 14:13:54.353469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.366558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.366600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:52.644 [2024-12-09 14:13:54.366615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.960 ms 00:20:52.644 [2024-12-09 14:13:54.366622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.644 [2024-12-09 14:13:54.367293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.644 [2024-12-09 14:13:54.367320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:52.644 [2024-12-09 14:13:54.367333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:20:52.644 [2024-12-09 14:13:54.367342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-12-09 
14:13:54.433214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-12-09 14:13:54.433274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:52.907 [2024-12-09 14:13:54.433294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.828 ms 00:20:52.907 [2024-12-09 14:13:54.433303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-12-09 14:13:54.444727] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:52.907 [2024-12-09 14:13:54.464232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-12-09 14:13:54.464290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:52.907 [2024-12-09 14:13:54.464307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.821 ms 00:20:52.907 [2024-12-09 14:13:54.464318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-12-09 14:13:54.464415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-12-09 14:13:54.464429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:52.907 [2024-12-09 14:13:54.464439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:52.907 [2024-12-09 14:13:54.464450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-12-09 14:13:54.464508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-12-09 14:13:54.464520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:52.907 [2024-12-09 14:13:54.464528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:52.907 [2024-12-09 14:13:54.464564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-12-09 14:13:54.464591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-12-09 14:13:54.464602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:52.907 [2024-12-09 14:13:54.464612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:52.907 [2024-12-09 14:13:54.464625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-12-09 14:13:54.464664] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:52.907 [2024-12-09 14:13:54.464679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-12-09 14:13:54.464691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:52.907 [2024-12-09 14:13:54.464701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:52.907 [2024-12-09 14:13:54.464708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-12-09 14:13:54.491027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-12-09 14:13:54.491076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:52.907 [2024-12-09 14:13:54.491093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.285 ms 00:20:52.907 [2024-12-09 14:13:54.491102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.907 [2024-12-09 14:13:54.491238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.907 [2024-12-09 14:13:54.491250] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:52.907 [2024-12-09 14:13:54.491263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:20:52.907 [2024-12-09 14:13:54.491274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:52.907 [2024-12-09 14:13:54.492468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:52.907 [2024-12-09 14:13:54.495943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.940 ms, result 0
00:20:52.907 [2024-12-09 14:13:54.497941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:52.907 Some configs were skipped because the RPC state that can call them passed over.
00:20:52.907 14:13:54 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:53.169 [2024-12-09 14:13:54.742945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:53.169 [2024-12-09 14:13:54.743016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:53.169 [2024-12-09 14:13:54.743032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms
00:20:53.169 [2024-12-09 14:13:54.743043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:53.169 [2024-12-09 14:13:54.743082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.552 ms, result 0
00:20:53.169 true
00:20:53.169 14:13:54 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:53.169 [2024-12-09 14:13:54.946492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:53.169 [2024-12-09 14:13:54.946558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:53.169 [2024-12-09 14:13:54.946575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms
00:20:53.169 [2024-12-09 14:13:54.946584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:53.169 [2024-12-09 14:13:54.946625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.847 ms, result 0
00:20:53.169 true
00:20:53.431 14:13:54 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76991
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76991 ']'
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76991
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76991
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:53.431 killing process with pid 76991
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76991'
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76991
00:20:53.431 14:13:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76991
00:20:54.003 [2024-12-09 14:13:55.719835]
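The four NOTICE lines per step above, from mngt/ftl_mngt.c lines 427/428/430/431, are the standard trace of one management-process step: an "Action" marker, the step name, its duration, and its status, with finish_msg summarizing the whole process. The bare "true" lines are rpc.py printing each RPC's result. The two bdev_ftl_unmap calls issued by trim.sh@99 and trim.sh@100 each drive a short 'FTL trim' process, trimming 1024 blocks at LBA 0 and at LBA 23591936, which is the last 1024-block range given the 23592960 L2P entries reported in the layout dump further down (23592960 - 1024 = 23591936). A minimal sketch of replaying the same step against a running target, assuming the default RPC socket, an existing ftl0 bdev, and an illustrative saved-log path build.log:

  # summarize a saved run: every traced step, then the per-process totals
  grep 'trace_step.*name:' build.log
  grep 'finish_msg' build.log
  # trim the first and last 1024 blocks of ftl0, as trim.sh@99/@100 do above
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024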
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.719887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.003 [2024-12-09 14:13:55.719898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.003 [2024-12-09 14:13:55.719905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.719925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:54.003 [2024-12-09 14:13:55.722164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.722190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.003 [2024-12-09 14:13:55.722202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.223 ms 00:20:54.003 [2024-12-09 14:13:55.722208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.722432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.722439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.003 [2024-12-09 14:13:55.722447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:20:54.003 [2024-12-09 14:13:55.722453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.725490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.725516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.003 [2024-12-09 14:13:55.725528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:20:54.003 [2024-12-09 14:13:55.725533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.730821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.730844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:54.003 [2024-12-09 14:13:55.730855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.247 ms 00:20:54.003 [2024-12-09 14:13:55.730861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.738969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.738999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.003 [2024-12-09 14:13:55.739010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.064 ms 00:20:54.003 [2024-12-09 14:13:55.739016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.745572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.745599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.003 [2024-12-09 14:13:55.745609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.522 ms 00:20:54.003 [2024-12-09 14:13:55.745616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.745720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.745729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.003 [2024-12-09 14:13:55.745736] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:54.003 [2024-12-09 14:13:55.745742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.753644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.753666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:54.003 [2024-12-09 14:13:55.753675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.886 ms 00:20:54.003 [2024-12-09 14:13:55.753681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.761302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.761324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:54.003 [2024-12-09 14:13:55.761336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.590 ms 00:20:54.003 [2024-12-09 14:13:55.761341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.768511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.768533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.003 [2024-12-09 14:13:55.768548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.139 ms 00:20:54.003 [2024-12-09 14:13:55.768553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.775790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.003 [2024-12-09 14:13:55.775812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.003 [2024-12-09 14:13:55.775820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.187 ms 00:20:54.003 [2024-12-09 14:13:55.775826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.003 [2024-12-09 14:13:55.775859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.003 [2024-12-09 14:13:55.775870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775937] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.775995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:54.003 [2024-12-09 14:13:55.776090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 
[2024-12-09 14:13:55.776096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:54.004 [2024-12-09 14:13:55.776254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:54.004 [2024-12-09 14:13:55.776516] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:54.004 [2024-12-09 14:13:55.776526] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 34e99555-49f3-4d3d-b544-1318be7f7bb8 00:20:54.004 [2024-12-09 14:13:55.776543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:54.004 [2024-12-09 14:13:55.776550] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:54.004 [2024-12-09 14:13:55.776556] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:54.004 [2024-12-09 14:13:55.776563] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:54.004 [2024-12-09 14:13:55.776568] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:54.004 [2024-12-09 14:13:55.776575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:54.004 [2024-12-09 14:13:55.776581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:54.004 [2024-12-09 14:13:55.776587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:54.004 [2024-12-09 14:13:55.776592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:54.004 [2024-12-09 14:13:55.776599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
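The statistics dump above makes the "WAF: inf" line mechanical rather than alarming: write amplification is media writes divided by host writes, and with total writes = 960 and user writes = 0 the ratio is a division by zero that the debug dump renders as inf; this pass wrote only metadata, no user data, and all 100 bands are still free. A sketch of pulling the same counters from a live target, assuming the bdev_ftl_get_stats RPC is available in this SPDK tree:

  scripts/rpc.py bdev_ftl_get_stats -b ftl0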
00:20:54.004 [2024-12-09 14:13:55.776605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:54.004 [2024-12-09 14:13:55.776613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:20:54.004 [2024-12-09 14:13:55.776618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.004 [2024-12-09 14:13:55.786277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.004 [2024-12-09 14:13:55.786297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:54.004 [2024-12-09 14:13:55.786308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.641 ms 00:20:54.004 [2024-12-09 14:13:55.786314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.004 [2024-12-09 14:13:55.786609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.004 [2024-12-09 14:13:55.786621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:54.004 [2024-12-09 14:13:55.786630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:54.005 [2024-12-09 14:13:55.786636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.821454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.821477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.263 [2024-12-09 14:13:55.821487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.821493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.821580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.821588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.263 [2024-12-09 14:13:55.821597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.821603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.821638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.821644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.263 [2024-12-09 14:13:55.821653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.821658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.821673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.821679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.263 [2024-12-09 14:13:55.821686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.821693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.881856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.881886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.263 [2024-12-09 14:13:55.881896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.881902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 
14:13:55.930514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.930549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.263 [2024-12-09 14:13:55.930559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.930568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.930628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.930635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.263 [2024-12-09 14:13:55.930645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.930650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.930674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.930680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.263 [2024-12-09 14:13:55.930688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.930693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.930765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.930772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.263 [2024-12-09 14:13:55.930780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.930786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.930812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.930819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:54.263 [2024-12-09 14:13:55.930826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.930832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.930864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.263 [2024-12-09 14:13:55.930870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.263 [2024-12-09 14:13:55.930879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.263 [2024-12-09 14:13:55.930885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.263 [2024-12-09 14:13:55.930919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.264 [2024-12-09 14:13:55.930926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.264 [2024-12-09 14:13:55.930933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.264 [2024-12-09 14:13:55.930938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.264 [2024-12-09 14:13:55.931042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.190 ms, result 0 00:20:54.832 14:13:56 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:54.832 [2024-12-09 14:13:56.531243] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:20:54.832 [2024-12-09 14:13:56.531367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77044 ] 00:20:55.092 [2024-12-09 14:13:56.690707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.092 [2024-12-09 14:13:56.777471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.353 [2024-12-09 14:13:56.989089] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.353 [2024-12-09 14:13:56.989136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.353 [2024-12-09 14:13:57.137363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.353 [2024-12-09 14:13:57.137405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:55.353 [2024-12-09 14:13:57.137418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.353 [2024-12-09 14:13:57.137426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.353 [2024-12-09 14:13:57.140051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.353 [2024-12-09 14:13:57.140082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.353 [2024-12-09 14:13:57.140091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:20:55.353 [2024-12-09 14:13:57.140099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.353 [2024-12-09 14:13:57.140175] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:55.353 [2024-12-09 14:13:57.140845] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:55.353 [2024-12-09 14:13:57.140865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.353 [2024-12-09 14:13:57.140873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.353 [2024-12-09 14:13:57.140882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:20:55.353 [2024-12-09 14:13:57.140889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.353 [2024-12-09 14:13:57.142034] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:55.616 [2024-12-09 14:13:57.154665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.154694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:55.616 [2024-12-09 14:13:57.154705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.632 ms 00:20:55.616 [2024-12-09 14:13:57.154713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.154800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.154811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:55.616 [2024-12-09 14:13:57.154820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:55.616 [2024-12-09 
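The spdk_dd invocation started at trim.sh@105 above re-opens ftl0 from the saved JSON config and copies it out to a flat file: --ib names an SPDK bdev as the input, --of a regular output file, and --count is given in bdev blocks, so 65536 blocks at the FTL's 4 KiB block size is 256 MiB, matching the "Copying: 256/256 [MB]" progress reported further down. A minimal stand-alone sketch with illustrative paths:

  # read 256 MiB (65536 x 4 KiB blocks) from ftl0 into a file
  build/bin/spdk_dd --ib=ftl0 --of=/tmp/ftl0.img --count=65536 --json=/tmp/ftl.json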
14:13:57.154827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.159903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.159927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.616 [2024-12-09 14:13:57.159936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.034 ms 00:20:55.616 [2024-12-09 14:13:57.159944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.160031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.160040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.616 [2024-12-09 14:13:57.160048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:55.616 [2024-12-09 14:13:57.160056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.160082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.160090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:55.616 [2024-12-09 14:13:57.160098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:55.616 [2024-12-09 14:13:57.160105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.160124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:55.616 [2024-12-09 14:13:57.163617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.163641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.616 [2024-12-09 14:13:57.163649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.497 ms 00:20:55.616 [2024-12-09 14:13:57.163657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.163693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.163701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:55.616 [2024-12-09 14:13:57.163709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:55.616 [2024-12-09 14:13:57.163716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.163735] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:55.616 [2024-12-09 14:13:57.163754] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:55.616 [2024-12-09 14:13:57.163788] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:55.616 [2024-12-09 14:13:57.163803] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:55.616 [2024-12-09 14:13:57.163906] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:55.616 [2024-12-09 14:13:57.163917] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:55.616 [2024-12-09 14:13:57.163926] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:20:55.616 [2024-12-09 14:13:57.163938] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:55.616 [2024-12-09 14:13:57.163947] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:55.616 [2024-12-09 14:13:57.163955] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:55.616 [2024-12-09 14:13:57.163962] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:55.616 [2024-12-09 14:13:57.163969] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:55.616 [2024-12-09 14:13:57.163976] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:55.616 [2024-12-09 14:13:57.163983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.163990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:55.616 [2024-12-09 14:13:57.163998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:55.616 [2024-12-09 14:13:57.164004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.164102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.616 [2024-12-09 14:13:57.164113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:55.616 [2024-12-09 14:13:57.164120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:55.616 [2024-12-09 14:13:57.164127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.616 [2024-12-09 14:13:57.164227] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:55.616 [2024-12-09 14:13:57.164236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:55.616 [2024-12-09 14:13:57.164244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.616 [2024-12-09 14:13:57.164251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:55.616 [2024-12-09 14:13:57.164267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:55.616 [2024-12-09 14:13:57.164281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:55.616 [2024-12-09 14:13:57.164287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.616 [2024-12-09 14:13:57.164301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:55.616 [2024-12-09 14:13:57.164314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:55.616 [2024-12-09 14:13:57.164320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.616 [2024-12-09 14:13:57.164327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:55.616 [2024-12-09 14:13:57.164333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:55.616 [2024-12-09 14:13:57.164339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:55.616 [2024-12-09 14:13:57.164353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:55.616 [2024-12-09 14:13:57.164360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:55.616 [2024-12-09 14:13:57.164373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.616 [2024-12-09 14:13:57.164385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:55.616 [2024-12-09 14:13:57.164392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.616 [2024-12-09 14:13:57.164404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:55.616 [2024-12-09 14:13:57.164410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.616 [2024-12-09 14:13:57.164422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:55.616 [2024-12-09 14:13:57.164429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.616 [2024-12-09 14:13:57.164441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:55.616 [2024-12-09 14:13:57.164447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:55.616 [2024-12-09 14:13:57.164453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.616 [2024-12-09 14:13:57.164459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:55.617 [2024-12-09 14:13:57.164466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:55.617 [2024-12-09 14:13:57.164474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.617 [2024-12-09 14:13:57.164481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:55.617 [2024-12-09 14:13:57.164487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:55.617 [2024-12-09 14:13:57.164494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.617 [2024-12-09 14:13:57.164500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:55.617 [2024-12-09 14:13:57.164506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:55.617 [2024-12-09 14:13:57.164513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.617 [2024-12-09 14:13:57.164519] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:55.617 [2024-12-09 14:13:57.164526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:55.617 [2024-12-09 14:13:57.164552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.617 [2024-12-09 14:13:57.164560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.617 [2024-12-09 14:13:57.164567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:55.617 [2024-12-09 14:13:57.164575] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:55.617 [2024-12-09 14:13:57.164581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:55.617 [2024-12-09 14:13:57.164588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:55.617 [2024-12-09 14:13:57.164594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:55.617 [2024-12-09 14:13:57.164601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:55.617 [2024-12-09 14:13:57.164609] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:55.617 [2024-12-09 14:13:57.164618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.617 [2024-12-09 14:13:57.164626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:55.617 [2024-12-09 14:13:57.164634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:55.617 [2024-12-09 14:13:57.164641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:55.617 [2024-12-09 14:13:57.164647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:55.617 [2024-12-09 14:13:57.164654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:55.617 [2024-12-09 14:13:57.164661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:55.617 [2024-12-09 14:13:57.164668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:55.617 [2024-12-09 14:13:57.164674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:55.617 [2024-12-09 14:13:57.164681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:55.617 [2024-12-09 14:13:57.164687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:55.617 [2024-12-09 14:13:57.164694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:55.617 [2024-12-09 14:13:57.164701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:55.617 [2024-12-09 14:13:57.164707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:55.617 [2024-12-09 14:13:57.164718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:55.617 [2024-12-09 14:13:57.164724] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:55.617 [2024-12-09 14:13:57.164732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.617 [2024-12-09 14:13:57.164740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:55.617 [2024-12-09 14:13:57.164747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:55.617 [2024-12-09 14:13:57.164754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:55.617 [2024-12-09 14:13:57.164761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:55.617 [2024-12-09 14:13:57.164768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.164778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:55.617 [2024-12-09 14:13:57.164785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:20:55.617 [2024-12-09 14:13:57.164792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.191523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.191566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.617 [2024-12-09 14:13:57.191576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.663 ms 00:20:55.617 [2024-12-09 14:13:57.191583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.191701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.191712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.617 [2024-12-09 14:13:57.191720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:55.617 [2024-12-09 14:13:57.191727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.232276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.232316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.617 [2024-12-09 14:13:57.232332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.527 ms 00:20:55.617 [2024-12-09 14:13:57.232341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.232440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.232452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.617 [2024-12-09 14:13:57.232461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:55.617 [2024-12-09 14:13:57.232469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.232941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.232972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.617 [2024-12-09 14:13:57.232990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:20:55.617 [2024-12-09 14:13:57.232998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.233145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
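The superblock tables above list each metadata region as a type and version plus blk_offs/blk_sz in hexadecimal 4 KiB blocks, and they tie out against the MiB figures in the preceding layout dump; two spot checks, assuming the FTL's 4 KiB block size:

  # L2P region, type:0x2: 0x5a00 = 23040 blocks * 4096 B = 90.00 MiB
  #   (consistent with 23592960 L2P entries * 4 B per entry = 90 MiB)
  # base data region, type:0x9: 0x1900000 = 26214400 blocks * 4096 B = 102400 MiB,
  #   i.e. the 102400.00 MiB data_btm region of the base device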
[FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.233155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.617 [2024-12-09 14:13:57.233163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:55.617 [2024-12-09 14:13:57.233171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.248492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.248531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.617 [2024-12-09 14:13:57.248565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.300 ms 00:20:55.617 [2024-12-09 14:13:57.248573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.262623] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:55.617 [2024-12-09 14:13:57.262666] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:55.617 [2024-12-09 14:13:57.262679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.262688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:55.617 [2024-12-09 14:13:57.262698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.998 ms 00:20:55.617 [2024-12-09 14:13:57.262706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.288327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.288372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.617 [2024-12-09 14:13:57.288386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.525 ms 00:20:55.617 [2024-12-09 14:13:57.288395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.617 [2024-12-09 14:13:57.301146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.617 [2024-12-09 14:13:57.301186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.618 [2024-12-09 14:13:57.301206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.654 ms 00:20:55.618 [2024-12-09 14:13:57.301214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.618 [2024-12-09 14:13:57.313893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.618 [2024-12-09 14:13:57.313944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.618 [2024-12-09 14:13:57.313955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.591 ms 00:20:55.618 [2024-12-09 14:13:57.313963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.618 [2024-12-09 14:13:57.314634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.618 [2024-12-09 14:13:57.314662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.618 [2024-12-09 14:13:57.314673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:20:55.618 [2024-12-09 14:13:57.314680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.618 [2024-12-09 14:13:57.379880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.618 [2024-12-09 
14:13:57.379935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.618 [2024-12-09 14:13:57.379951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.171 ms 00:20:55.618 [2024-12-09 14:13:57.379961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.618 [2024-12-09 14:13:57.391322] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:55.880 [2024-12-09 14:13:57.410485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.880 [2024-12-09 14:13:57.410530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.880 [2024-12-09 14:13:57.410558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.427 ms 00:20:55.880 [2024-12-09 14:13:57.410573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.880 [2024-12-09 14:13:57.410673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.880 [2024-12-09 14:13:57.410685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.880 [2024-12-09 14:13:57.410695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:55.880 [2024-12-09 14:13:57.410703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.880 [2024-12-09 14:13:57.410765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.880 [2024-12-09 14:13:57.410775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.880 [2024-12-09 14:13:57.410784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:55.880 [2024-12-09 14:13:57.410797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.880 [2024-12-09 14:13:57.410830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.880 [2024-12-09 14:13:57.410839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.880 [2024-12-09 14:13:57.410848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:55.880 [2024-12-09 14:13:57.410856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.880 [2024-12-09 14:13:57.410895] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.880 [2024-12-09 14:13:57.410906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.880 [2024-12-09 14:13:57.410915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.880 [2024-12-09 14:13:57.410923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:55.880 [2024-12-09 14:13:57.410932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.880 [2024-12-09 14:13:57.437490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.880 [2024-12-09 14:13:57.437546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.880 [2024-12-09 14:13:57.437561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.537 ms 00:20:55.880 [2024-12-09 14:13:57.437570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.880 [2024-12-09 14:13:57.437720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.880 [2024-12-09 14:13:57.437734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.880 [2024-12-09 
14:13:57.437743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:55.880 [2024-12-09 14:13:57.437751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.880 [2024-12-09 14:13:57.438942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.880 [2024-12-09 14:13:57.442430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.240 ms, result 0 00:20:55.880 [2024-12-09 14:13:57.443862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:55.880 [2024-12-09 14:13:57.457360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:56.866  [2024-12-09T14:13:59.605Z] Copying: 17/256 [MB] (17 MBps) [2024-12-09T14:14:00.551Z] Copying: 35/256 [MB] (17 MBps) [2024-12-09T14:14:01.940Z] Copying: 55/256 [MB] (20 MBps) [2024-12-09T14:14:02.512Z] Copying: 76/256 [MB] (20 MBps) [2024-12-09T14:14:03.899Z] Copying: 93/256 [MB] (17 MBps) [2024-12-09T14:14:04.844Z] Copying: 114/256 [MB] (21 MBps) [2024-12-09T14:14:05.788Z] Copying: 128/256 [MB] (13 MBps) [2024-12-09T14:14:06.732Z] Copying: 142/256 [MB] (13 MBps) [2024-12-09T14:14:07.675Z] Copying: 154/256 [MB] (12 MBps) [2024-12-09T14:14:08.619Z] Copying: 178/256 [MB] (24 MBps) [2024-12-09T14:14:09.562Z] Copying: 195/256 [MB] (16 MBps) [2024-12-09T14:14:10.537Z] Copying: 215/256 [MB] (19 MBps) [2024-12-09T14:14:11.924Z] Copying: 231/256 [MB] (15 MBps) [2024-12-09T14:14:12.868Z] Copying: 243/256 [MB] (12 MBps) [2024-12-09T14:14:12.868Z] Copying: 254/256 [MB] (10 MBps) [2024-12-09T14:14:13.130Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-09 14:14:12.990485] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:11.336 [2024-12-09 14:14:13.006036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.006073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.336 [2024-12-09 14:14:13.006092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:11.336 [2024-12-09 14:14:13.006100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.006390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:11.336 [2024-12-09 14:14:13.008951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.008978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.336 [2024-12-09 14:14:13.008988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:21:11.336 [2024-12-09 14:14:13.008996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.009266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.009277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.336 [2024-12-09 14:14:13.009285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:21:11.336 [2024-12-09 14:14:13.009292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.013332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 
14:14:13.013358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.336 [2024-12-09 14:14:13.013368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.021 ms 00:21:11.336 [2024-12-09 14:14:13.013376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.020245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.020270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.336 [2024-12-09 14:14:13.020279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:21:11.336 [2024-12-09 14:14:13.020286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.043742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.043864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:11.336 [2024-12-09 14:14:13.043880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.402 ms 00:21:11.336 [2024-12-09 14:14:13.043887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.057361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.057392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:11.336 [2024-12-09 14:14:13.057407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.443 ms 00:21:11.336 [2024-12-09 14:14:13.057414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.057562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.057573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:11.336 [2024-12-09 14:14:13.057587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:11.336 [2024-12-09 14:14:13.057594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.080975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.081004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:11.336 [2024-12-09 14:14:13.081013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.364 ms 00:21:11.336 [2024-12-09 14:14:13.081021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.104328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.104356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:11.336 [2024-12-09 14:14:13.104365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.276 ms 00:21:11.336 [2024-12-09 14:14:13.104372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.336 [2024-12-09 14:14:13.126693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.336 [2024-12-09 14:14:13.126720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:11.336 [2024-12-09 14:14:13.126730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.288 ms 00:21:11.336 [2024-12-09 14:14:13.126738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.599 [2024-12-09 14:14:13.149208] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.599 [2024-12-09 14:14:13.149236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:11.599 [2024-12-09 14:14:13.149246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.412 ms 00:21:11.599 [2024-12-09 14:14:13.149253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.599 [2024-12-09 14:14:13.149284] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:11.599 [2024-12-09 14:14:13.149297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149452] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 
[2024-12-09 14:14:13.149659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:11.599 [2024-12-09 14:14:13.149859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:21:11.600 [2024-12-09 14:14:13.149866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.149995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:11.600 [2024-12-09 14:14:13.150089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:11.600 [2024-12-09 14:14:13.150096] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 34e99555-49f3-4d3d-b544-1318be7f7bb8 00:21:11.600 [2024-12-09 14:14:13.150104] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:11.600 [2024-12-09 14:14:13.150111] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:11.600 [2024-12-09 14:14:13.150119] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:11.600 [2024-12-09 14:14:13.150126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:11.600 [2024-12-09 14:14:13.150133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:11.600 [2024-12-09 14:14:13.150140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:11.600 [2024-12-09 14:14:13.150149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:11.600 [2024-12-09 14:14:13.150156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:11.600 [2024-12-09 14:14:13.150162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:11.600 [2024-12-09 14:14:13.150169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.600 [2024-12-09 14:14:13.150176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:11.600 [2024-12-09 14:14:13.150184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:21:11.600 [2024-12-09 14:14:13.150191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.162411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.600 [2024-12-09 14:14:13.162436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:11.600 [2024-12-09 14:14:13.162445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.203 ms 00:21:11.600 [2024-12-09 14:14:13.162452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.162818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.600 [2024-12-09 14:14:13.162828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:11.600 [2024-12-09 14:14:13.162836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:21:11.600 [2024-12-09 14:14:13.162843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.197522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.197656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.600 [2024-12-09 14:14:13.197671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.197683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.197763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.197772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.600 [2024-12-09 14:14:13.197780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.197787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.197831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.197840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.600 [2024-12-09 14:14:13.197847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.197854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.197874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.197881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.600 [2024-12-09 14:14:13.197891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.197897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.274644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.274679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.600 [2024-12-09 14:14:13.274690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.274697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.337341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.337480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.600 [2024-12-09 14:14:13.337495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.337503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.337578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.337589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.600 [2024-12-09 14:14:13.337597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.337604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.337632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.337644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.600 [2024-12-09 14:14:13.337652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.337660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.337746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.337756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.600 
[2024-12-09 14:14:13.337763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.337771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.337801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.337810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:11.600 [2024-12-09 14:14:13.337820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.337827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.337863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.337872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.600 [2024-12-09 14:14:13.337880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.337887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.337925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.600 [2024-12-09 14:14:13.337937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.600 [2024-12-09 14:14:13.337944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.600 [2024-12-09 14:14:13.337952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.600 [2024-12-09 14:14:13.338075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.042 ms, result 0 00:21:12.544 00:21:12.544 00:21:12.544 14:14:14 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:12.806 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:12.806 14:14:14 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:12.806 14:14:14 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:12.806 14:14:14 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:12.806 14:14:14 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:12.806 14:14:14 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:13.068 14:14:14 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:13.068 Process with pid 76991 is not found 00:21:13.068 14:14:14 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76991 00:21:13.068 14:14:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76991 ']' 00:21:13.068 14:14:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76991 00:21:13.068 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76991) - No such process 00:21:13.068 14:14:14 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76991 is not found' 00:21:13.068 00:21:13.068 real 1m14.751s 00:21:13.068 user 1m35.964s 00:21:13.068 sys 0m5.655s 00:21:13.068 ************************************ 00:21:13.068 END TEST ftl_trim 00:21:13.068 ************************************ 00:21:13.068 14:14:14 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.068 14:14:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:13.068 14:14:14 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:13.068 14:14:14 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:13.068 14:14:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.068 14:14:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:13.068 ************************************ 00:21:13.068 START TEST ftl_restore 00:21:13.068 ************************************ 00:21:13.068 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:13.068 * Looking for test storage... 00:21:13.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:13.068 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:13.068 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:21:13.068 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:13.330 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.330 14:14:14 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:13.330 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.330 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:13.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.330 --rc genhtml_branch_coverage=1 00:21:13.330 --rc genhtml_function_coverage=1 00:21:13.330 --rc genhtml_legend=1 00:21:13.330 --rc geninfo_all_blocks=1 00:21:13.330 --rc geninfo_unexecuted_blocks=1 00:21:13.330 00:21:13.330 ' 00:21:13.330 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:13.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.330 --rc genhtml_branch_coverage=1 00:21:13.330 --rc genhtml_function_coverage=1 00:21:13.330 --rc genhtml_legend=1 00:21:13.330 --rc geninfo_all_blocks=1 00:21:13.330 --rc geninfo_unexecuted_blocks=1 00:21:13.330 00:21:13.330 ' 00:21:13.330 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:13.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.330 --rc genhtml_branch_coverage=1 00:21:13.330 --rc genhtml_function_coverage=1 00:21:13.330 --rc genhtml_legend=1 00:21:13.330 --rc geninfo_all_blocks=1 00:21:13.330 --rc geninfo_unexecuted_blocks=1 00:21:13.330 00:21:13.330 ' 00:21:13.330 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:13.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.330 --rc genhtml_branch_coverage=1 00:21:13.330 --rc genhtml_function_coverage=1 00:21:13.330 --rc genhtml_legend=1 00:21:13.330 --rc geninfo_all_blocks=1 00:21:13.331 --rc geninfo_unexecuted_blocks=1 00:21:13.331 00:21:13.331 ' 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.nZCqoPjXep 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:13.331 
14:14:14 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77297 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77297 00:21:13.331 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77297 ']' 00:21:13.331 14:14:14 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.331 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.331 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.331 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.331 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.331 14:14:14 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:13.331 [2024-12-09 14:14:14.984333] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:21:13.331 [2024-12-09 14:14:14.984778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77297 ] 00:21:13.592 [2024-12-09 14:14:15.147110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.592 [2024-12-09 14:14:15.272009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.164 14:14:15 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.435 14:14:15 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:14.435 14:14:15 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:14.435 14:14:15 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:14.435 14:14:15 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:14.435 14:14:15 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:14.435 14:14:15 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:14.435 14:14:15 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:14.702 14:14:16 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:14.702 14:14:16 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:14.702 14:14:16 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:14.702 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:14.702 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:14.702 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:14.702 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:14.702 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:14.964 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:14.964 { 00:21:14.964 "name": "nvme0n1", 00:21:14.964 "aliases": [ 00:21:14.964 "a5fb5ffc-e1ae-4056-abb6-aea0630b52ba" 00:21:14.964 ], 00:21:14.964 "product_name": "NVMe disk", 00:21:14.964 "block_size": 4096, 00:21:14.964 "num_blocks": 1310720, 00:21:14.964 "uuid": 
"a5fb5ffc-e1ae-4056-abb6-aea0630b52ba", 00:21:14.964 "numa_id": -1, 00:21:14.964 "assigned_rate_limits": { 00:21:14.964 "rw_ios_per_sec": 0, 00:21:14.964 "rw_mbytes_per_sec": 0, 00:21:14.964 "r_mbytes_per_sec": 0, 00:21:14.964 "w_mbytes_per_sec": 0 00:21:14.964 }, 00:21:14.964 "claimed": true, 00:21:14.964 "claim_type": "read_many_write_one", 00:21:14.964 "zoned": false, 00:21:14.964 "supported_io_types": { 00:21:14.964 "read": true, 00:21:14.964 "write": true, 00:21:14.964 "unmap": true, 00:21:14.964 "flush": true, 00:21:14.964 "reset": true, 00:21:14.964 "nvme_admin": true, 00:21:14.964 "nvme_io": true, 00:21:14.964 "nvme_io_md": false, 00:21:14.964 "write_zeroes": true, 00:21:14.964 "zcopy": false, 00:21:14.964 "get_zone_info": false, 00:21:14.964 "zone_management": false, 00:21:14.964 "zone_append": false, 00:21:14.964 "compare": true, 00:21:14.964 "compare_and_write": false, 00:21:14.964 "abort": true, 00:21:14.964 "seek_hole": false, 00:21:14.964 "seek_data": false, 00:21:14.964 "copy": true, 00:21:14.964 "nvme_iov_md": false 00:21:14.964 }, 00:21:14.964 "driver_specific": { 00:21:14.964 "nvme": [ 00:21:14.964 { 00:21:14.964 "pci_address": "0000:00:11.0", 00:21:14.964 "trid": { 00:21:14.964 "trtype": "PCIe", 00:21:14.964 "traddr": "0000:00:11.0" 00:21:14.964 }, 00:21:14.964 "ctrlr_data": { 00:21:14.964 "cntlid": 0, 00:21:14.964 "vendor_id": "0x1b36", 00:21:14.964 "model_number": "QEMU NVMe Ctrl", 00:21:14.964 "serial_number": "12341", 00:21:14.964 "firmware_revision": "8.0.0", 00:21:14.964 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:14.964 "oacs": { 00:21:14.964 "security": 0, 00:21:14.964 "format": 1, 00:21:14.964 "firmware": 0, 00:21:14.964 "ns_manage": 1 00:21:14.964 }, 00:21:14.964 "multi_ctrlr": false, 00:21:14.964 "ana_reporting": false 00:21:14.964 }, 00:21:14.964 "vs": { 00:21:14.964 "nvme_version": "1.4" 00:21:14.964 }, 00:21:14.964 "ns_data": { 00:21:14.964 "id": 1, 00:21:14.964 "can_share": false 00:21:14.964 } 00:21:14.964 } 00:21:14.964 ], 00:21:14.964 "mp_policy": "active_passive" 00:21:14.964 } 00:21:14.964 } 00:21:14.964 ]' 00:21:14.964 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:14.964 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:14.964 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:14.964 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:14.964 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:14.964 14:14:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:14.964 14:14:16 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:14.964 14:14:16 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:14.964 14:14:16 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:14.964 14:14:16 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:14.964 14:14:16 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:15.225 14:14:16 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7fff9980-7f60-4cfc-9a92-05e41fbdd885 00:21:15.225 14:14:16 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:15.225 14:14:16 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fff9980-7f60-4cfc-9a92-05e41fbdd885 00:21:15.487 14:14:17 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=f29795b2-f781-4836-b1e0-de9a6d56367e 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f29795b2-f781-4836-b1e0-de9a6d56367e 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=4af8845b-1b32-4a67-8595-e1a816a82988 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4af8845b-1b32-4a67-8595-e1a816a82988 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=4af8845b-1b32-4a67-8595-e1a816a82988 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:15.748 14:14:17 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 4af8845b-1b32-4a67-8595-e1a816a82988 00:21:15.748 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4af8845b-1b32-4a67-8595-e1a816a82988 00:21:15.748 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:15.748 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:15.748 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:15.748 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4af8845b-1b32-4a67-8595-e1a816a82988 00:21:16.009 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:16.009 { 00:21:16.009 "name": "4af8845b-1b32-4a67-8595-e1a816a82988", 00:21:16.009 "aliases": [ 00:21:16.009 "lvs/nvme0n1p0" 00:21:16.009 ], 00:21:16.009 "product_name": "Logical Volume", 00:21:16.009 "block_size": 4096, 00:21:16.009 "num_blocks": 26476544, 00:21:16.009 "uuid": "4af8845b-1b32-4a67-8595-e1a816a82988", 00:21:16.009 "assigned_rate_limits": { 00:21:16.009 "rw_ios_per_sec": 0, 00:21:16.009 "rw_mbytes_per_sec": 0, 00:21:16.009 "r_mbytes_per_sec": 0, 00:21:16.009 "w_mbytes_per_sec": 0 00:21:16.009 }, 00:21:16.009 "claimed": false, 00:21:16.009 "zoned": false, 00:21:16.009 "supported_io_types": { 00:21:16.009 "read": true, 00:21:16.009 "write": true, 00:21:16.009 "unmap": true, 00:21:16.009 "flush": false, 00:21:16.009 "reset": true, 00:21:16.009 "nvme_admin": false, 00:21:16.009 "nvme_io": false, 00:21:16.009 "nvme_io_md": false, 00:21:16.009 "write_zeroes": true, 00:21:16.009 "zcopy": false, 00:21:16.009 "get_zone_info": false, 00:21:16.009 "zone_management": false, 00:21:16.009 "zone_append": false, 00:21:16.009 "compare": false, 00:21:16.009 "compare_and_write": false, 00:21:16.009 "abort": false, 00:21:16.009 "seek_hole": true, 00:21:16.009 "seek_data": true, 00:21:16.009 "copy": false, 00:21:16.009 "nvme_iov_md": false 00:21:16.009 }, 00:21:16.009 "driver_specific": { 00:21:16.009 "lvol": { 00:21:16.009 "lvol_store_uuid": "f29795b2-f781-4836-b1e0-de9a6d56367e", 00:21:16.009 "base_bdev": "nvme0n1", 00:21:16.009 "thin_provision": true, 00:21:16.009 "num_allocated_clusters": 0, 00:21:16.009 "snapshot": false, 00:21:16.009 "clone": false, 00:21:16.009 "esnap_clone": false 00:21:16.009 } 00:21:16.009 } 00:21:16.009 } 00:21:16.009 ]' 00:21:16.009 14:14:17 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:16.009 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:16.009 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:16.271 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:16.271 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:16.271 14:14:17 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:16.271 14:14:17 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:16.271 14:14:17 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:16.271 14:14:17 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:16.533 14:14:18 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:16.533 14:14:18 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:16.533 14:14:18 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 4af8845b-1b32-4a67-8595-e1a816a82988 00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4af8845b-1b32-4a67-8595-e1a816a82988 00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4af8845b-1b32-4a67-8595-e1a816a82988 00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:16.533 { 00:21:16.533 "name": "4af8845b-1b32-4a67-8595-e1a816a82988", 00:21:16.533 "aliases": [ 00:21:16.533 "lvs/nvme0n1p0" 00:21:16.533 ], 00:21:16.533 "product_name": "Logical Volume", 00:21:16.533 "block_size": 4096, 00:21:16.533 "num_blocks": 26476544, 00:21:16.533 "uuid": "4af8845b-1b32-4a67-8595-e1a816a82988", 00:21:16.533 "assigned_rate_limits": { 00:21:16.533 "rw_ios_per_sec": 0, 00:21:16.533 "rw_mbytes_per_sec": 0, 00:21:16.533 "r_mbytes_per_sec": 0, 00:21:16.533 "w_mbytes_per_sec": 0 00:21:16.533 }, 00:21:16.533 "claimed": false, 00:21:16.533 "zoned": false, 00:21:16.533 "supported_io_types": { 00:21:16.533 "read": true, 00:21:16.533 "write": true, 00:21:16.533 "unmap": true, 00:21:16.533 "flush": false, 00:21:16.533 "reset": true, 00:21:16.533 "nvme_admin": false, 00:21:16.533 "nvme_io": false, 00:21:16.533 "nvme_io_md": false, 00:21:16.533 "write_zeroes": true, 00:21:16.533 "zcopy": false, 00:21:16.533 "get_zone_info": false, 00:21:16.533 "zone_management": false, 00:21:16.533 "zone_append": false, 00:21:16.533 "compare": false, 00:21:16.533 "compare_and_write": false, 00:21:16.533 "abort": false, 00:21:16.533 "seek_hole": true, 00:21:16.533 "seek_data": true, 00:21:16.533 "copy": false, 00:21:16.533 "nvme_iov_md": false 00:21:16.533 }, 00:21:16.533 "driver_specific": { 00:21:16.533 "lvol": { 00:21:16.533 "lvol_store_uuid": "f29795b2-f781-4836-b1e0-de9a6d56367e", 00:21:16.533 "base_bdev": "nvme0n1", 00:21:16.533 "thin_provision": true, 00:21:16.533 "num_allocated_clusters": 0, 00:21:16.533 "snapshot": false, 00:21:16.533 "clone": false, 00:21:16.533 "esnap_clone": false 00:21:16.533 } 00:21:16.533 } 00:21:16.533 } 00:21:16.533 ]' 00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:16.533 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:16.794 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:16.794 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:16.794 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:16.794 14:14:18 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:16.794 14:14:18 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:16.794 14:14:18 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:16.794 14:14:18 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 4af8845b-1b32-4a67-8595-e1a816a82988 00:21:16.794 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4af8845b-1b32-4a67-8595-e1a816a82988 00:21:16.795 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:16.795 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:16.795 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:16.795 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4af8845b-1b32-4a67-8595-e1a816a82988 00:21:17.053 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:17.053 { 00:21:17.053 "name": "4af8845b-1b32-4a67-8595-e1a816a82988", 00:21:17.053 "aliases": [ 00:21:17.053 "lvs/nvme0n1p0" 00:21:17.053 ], 00:21:17.053 "product_name": "Logical Volume", 00:21:17.053 "block_size": 4096, 00:21:17.053 "num_blocks": 26476544, 00:21:17.053 "uuid": "4af8845b-1b32-4a67-8595-e1a816a82988", 00:21:17.053 "assigned_rate_limits": { 00:21:17.053 "rw_ios_per_sec": 0, 00:21:17.053 "rw_mbytes_per_sec": 0, 00:21:17.053 "r_mbytes_per_sec": 0, 00:21:17.053 "w_mbytes_per_sec": 0 00:21:17.053 }, 00:21:17.053 "claimed": false, 00:21:17.053 "zoned": false, 00:21:17.053 "supported_io_types": { 00:21:17.053 "read": true, 00:21:17.053 "write": true, 00:21:17.053 "unmap": true, 00:21:17.053 "flush": false, 00:21:17.053 "reset": true, 00:21:17.053 "nvme_admin": false, 00:21:17.053 "nvme_io": false, 00:21:17.053 "nvme_io_md": false, 00:21:17.053 "write_zeroes": true, 00:21:17.053 "zcopy": false, 00:21:17.053 "get_zone_info": false, 00:21:17.053 "zone_management": false, 00:21:17.053 "zone_append": false, 00:21:17.053 "compare": false, 00:21:17.053 "compare_and_write": false, 00:21:17.053 "abort": false, 00:21:17.053 "seek_hole": true, 00:21:17.053 "seek_data": true, 00:21:17.053 "copy": false, 00:21:17.053 "nvme_iov_md": false 00:21:17.053 }, 00:21:17.053 "driver_specific": { 00:21:17.053 "lvol": { 00:21:17.053 "lvol_store_uuid": "f29795b2-f781-4836-b1e0-de9a6d56367e", 00:21:17.053 "base_bdev": "nvme0n1", 00:21:17.053 "thin_provision": true, 00:21:17.053 "num_allocated_clusters": 0, 00:21:17.053 "snapshot": false, 00:21:17.053 "clone": false, 00:21:17.053 "esnap_clone": false 00:21:17.053 } 00:21:17.053 } 00:21:17.053 } 00:21:17.053 ]' 00:21:17.053 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:17.053 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:17.053 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:17.053 14:14:18 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:17.053 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:17.053 14:14:18 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:17.053 14:14:18 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:17.053 14:14:18 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4af8845b-1b32-4a67-8595-e1a816a82988 --l2p_dram_limit 10' 00:21:17.053 14:14:18 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:17.053 14:14:18 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:17.053 14:14:18 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:17.053 14:14:18 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:17.053 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:17.053 14:14:18 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4af8845b-1b32-4a67-8595-e1a816a82988 --l2p_dram_limit 10 -c nvc0n1p0 00:21:17.314 [2024-12-09 14:14:19.017633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.314 [2024-12-09 14:14:19.017671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:17.314 [2024-12-09 14:14:19.017683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:17.314 [2024-12-09 14:14:19.017689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.314 [2024-12-09 14:14:19.017738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.314 [2024-12-09 14:14:19.017745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:17.314 [2024-12-09 14:14:19.017753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:17.314 [2024-12-09 14:14:19.017759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.314 [2024-12-09 14:14:19.017779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:17.314 [2024-12-09 14:14:19.018334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:17.314 [2024-12-09 14:14:19.018355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.314 [2024-12-09 14:14:19.018361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:17.314 [2024-12-09 14:14:19.018370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:21:17.314 [2024-12-09 14:14:19.018375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.018428] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 05373d07-4b4e-457d-b347-de8cd136f1a9 00:21:17.315 [2024-12-09 14:14:19.019356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.315 [2024-12-09 14:14:19.019384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:17.315 [2024-12-09 14:14:19.019391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:17.315 [2024-12-09 14:14:19.019399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.023994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.315 [2024-12-09 
14:14:19.024025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:17.315 [2024-12-09 14:14:19.024033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.562 ms 00:21:17.315 [2024-12-09 14:14:19.024041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.024107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.315 [2024-12-09 14:14:19.024116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:17.315 [2024-12-09 14:14:19.024122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:17.315 [2024-12-09 14:14:19.024131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.024168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.315 [2024-12-09 14:14:19.024177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:17.315 [2024-12-09 14:14:19.024185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:17.315 [2024-12-09 14:14:19.024191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.024207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:17.315 [2024-12-09 14:14:19.027052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.315 [2024-12-09 14:14:19.027076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:17.315 [2024-12-09 14:14:19.027086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:21:17.315 [2024-12-09 14:14:19.027092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.027120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.315 [2024-12-09 14:14:19.027127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:17.315 [2024-12-09 14:14:19.027134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:17.315 [2024-12-09 14:14:19.027140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.027159] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:17.315 [2024-12-09 14:14:19.027267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:17.315 [2024-12-09 14:14:19.027279] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:17.315 [2024-12-09 14:14:19.027287] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:17.315 [2024-12-09 14:14:19.027296] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027303] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027311] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:17.315 [2024-12-09 14:14:19.027316] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:17.315 [2024-12-09 14:14:19.027325] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:17.315 [2024-12-09 14:14:19.027330] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:17.315 [2024-12-09 14:14:19.027338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.315 [2024-12-09 14:14:19.027348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:17.315 [2024-12-09 14:14:19.027355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:21:17.315 [2024-12-09 14:14:19.027360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.027427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.315 [2024-12-09 14:14:19.027433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:17.315 [2024-12-09 14:14:19.027440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:17.315 [2024-12-09 14:14:19.027446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.315 [2024-12-09 14:14:19.027523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:17.315 [2024-12-09 14:14:19.027530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:17.315 [2024-12-09 14:14:19.027547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:17.315 [2024-12-09 14:14:19.027566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:17.315 [2024-12-09 14:14:19.027584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:17.315 [2024-12-09 14:14:19.027596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:17.315 [2024-12-09 14:14:19.027602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:17.315 [2024-12-09 14:14:19.027607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:17.315 [2024-12-09 14:14:19.027612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:17.315 [2024-12-09 14:14:19.027619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:17.315 [2024-12-09 14:14:19.027624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:17.315 [2024-12-09 14:14:19.027637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:17.315 [2024-12-09 14:14:19.027655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:17.315 
[2024-12-09 14:14:19.027671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:17.315 [2024-12-09 14:14:19.027690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:17.315 [2024-12-09 14:14:19.027706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:17.315 [2024-12-09 14:14:19.027725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:17.315 [2024-12-09 14:14:19.027736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:17.315 [2024-12-09 14:14:19.027741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:17.315 [2024-12-09 14:14:19.027748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:17.315 [2024-12-09 14:14:19.027753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:17.315 [2024-12-09 14:14:19.027759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:17.315 [2024-12-09 14:14:19.027764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:17.315 [2024-12-09 14:14:19.027776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:17.315 [2024-12-09 14:14:19.027782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027787] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:17.315 [2024-12-09 14:14:19.027793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:17.315 [2024-12-09 14:14:19.027799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.315 [2024-12-09 14:14:19.027811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:17.315 [2024-12-09 14:14:19.027818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:17.315 [2024-12-09 14:14:19.027823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:17.315 [2024-12-09 14:14:19.027829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:17.315 [2024-12-09 14:14:19.027834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:17.315 [2024-12-09 14:14:19.027840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:17.315 [2024-12-09 14:14:19.027846] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:17.315 [2024-12-09 
14:14:19.027856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:17.315 [2024-12-09 14:14:19.027863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:17.316 [2024-12-09 14:14:19.027870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:17.316 [2024-12-09 14:14:19.027876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:17.316 [2024-12-09 14:14:19.027883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:17.316 [2024-12-09 14:14:19.027889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:17.316 [2024-12-09 14:14:19.027895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:17.316 [2024-12-09 14:14:19.027901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:17.316 [2024-12-09 14:14:19.027909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:17.316 [2024-12-09 14:14:19.027914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:17.316 [2024-12-09 14:14:19.027922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:17.316 [2024-12-09 14:14:19.027927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:17.316 [2024-12-09 14:14:19.027934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:17.316 [2024-12-09 14:14:19.027939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:17.316 [2024-12-09 14:14:19.027946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:17.316 [2024-12-09 14:14:19.027951] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:17.316 [2024-12-09 14:14:19.027959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:17.316 [2024-12-09 14:14:19.027965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:17.316 [2024-12-09 14:14:19.027972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:17.316 [2024-12-09 14:14:19.027977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:17.316 [2024-12-09 14:14:19.027984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:17.316 [2024-12-09 14:14:19.027990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.316 [2024-12-09 14:14:19.027997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:17.316 [2024-12-09 14:14:19.028002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:21:17.316 [2024-12-09 14:14:19.028009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.316 [2024-12-09 14:14:19.028042] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:17.316 [2024-12-09 14:14:19.028052] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:21.521 [2024-12-09 14:14:22.774128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.774219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:21.522 [2024-12-09 14:14:22.774240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3746.068 ms 00:21:21.522 [2024-12-09 14:14:22.774253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.806736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.806805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:21.522 [2024-12-09 14:14:22.806820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.230 ms 00:21:21.522 [2024-12-09 14:14:22.806832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.806981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.806997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:21.522 [2024-12-09 14:14:22.807006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:21.522 [2024-12-09 14:14:22.807023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.842630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.842690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:21.522 [2024-12-09 14:14:22.842702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.568 ms 00:21:21.522 [2024-12-09 14:14:22.842713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.842749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.842764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:21.522 [2024-12-09 14:14:22.842774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:21.522 [2024-12-09 14:14:22.842793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.843414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.843444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:21.522 [2024-12-09 14:14:22.843456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:21:21.522 [2024-12-09 14:14:22.843467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 
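The "[: : integer expression expected" diagnostic logged before the bdev_ftl_create call above is a shell quirk rather than a test failure: restore.sh line 54 runs '[' '' -eq 1 ']', and the -eq operator requires integer operands, so an unset or empty flag variable makes '[' print the diagnostic and return non-zero. The script simply takes the false branch and continues, as the subsequent RPC trace shows. A minimal sketch of a guarded form of that test, assuming a hypothetical flag name (the trace only shows the variable's empty expansion, not its name):

    # Default an unset/empty flag to 0 so '[' always sees an integer operand.
    if [ "${fast_startup:-0}" -eq 1 ]; then
        echo "fast startup requested"
    fi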
[2024-12-09 14:14:22.843607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.843621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:21.522 [2024-12-09 14:14:22.843634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:21:21.522 [2024-12-09 14:14:22.843647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.861292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.861346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:21.522 [2024-12-09 14:14:22.861359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.624 ms 00:21:21.522 [2024-12-09 14:14:22.861370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.886816] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:21.522 [2024-12-09 14:14:22.891090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.891144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:21.522 [2024-12-09 14:14:22.891162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.627 ms 00:21:21.522 [2024-12-09 14:14:22.891172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.993266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.993341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:21.522 [2024-12-09 14:14:22.993362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.036 ms 00:21:21.522 [2024-12-09 14:14:22.993371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:22.993617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:22.993630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:21.522 [2024-12-09 14:14:22.993646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:21:21.522 [2024-12-09 14:14:22.993654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.020314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.020369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:21.522 [2024-12-09 14:14:23.020387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.597 ms 00:21:21.522 [2024-12-09 14:14:23.020399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.046174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.046224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:21.522 [2024-12-09 14:14:23.046240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.714 ms 00:21:21.522 [2024-12-09 14:14:23.046248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.046884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.046897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:21.522 
[2024-12-09 14:14:23.046913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:21:21.522 [2024-12-09 14:14:23.046920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.132412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.132472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:21.522 [2024-12-09 14:14:23.132494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.411 ms 00:21:21.522 [2024-12-09 14:14:23.132503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.159976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.160032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:21.522 [2024-12-09 14:14:23.160049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.353 ms 00:21:21.522 [2024-12-09 14:14:23.160058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.185995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.186048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:21.522 [2024-12-09 14:14:23.186063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:21:21.522 [2024-12-09 14:14:23.186071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.212939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.212990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:21.522 [2024-12-09 14:14:23.213007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.809 ms 00:21:21.522 [2024-12-09 14:14:23.213014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.213073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.213084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:21.522 [2024-12-09 14:14:23.213100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:21.522 [2024-12-09 14:14:23.213108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.213230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.522 [2024-12-09 14:14:23.213245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:21.522 [2024-12-09 14:14:23.213256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:21.522 [2024-12-09 14:14:23.213264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.522 [2024-12-09 14:14:23.214441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4196.291 ms, result 0 00:21:21.522 { 00:21:21.522 "name": "ftl0", 00:21:21.522 "uuid": "05373d07-4b4e-457d-b347-de8cd136f1a9" 00:21:21.522 } 00:21:21.522 14:14:23 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:21.522 14:14:23 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:21.782 14:14:23 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:21.782 14:14:23 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:22.044 [2024-12-09 14:14:23.653785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.653857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.044 [2024-12-09 14:14:23.653872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.044 [2024-12-09 14:14:23.653884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.653911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.044 [2024-12-09 14:14:23.657043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.657087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.044 [2024-12-09 14:14:23.657102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.109 ms 00:21:22.044 [2024-12-09 14:14:23.657111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.657407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.657420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.044 [2024-12-09 14:14:23.657432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:21:22.044 [2024-12-09 14:14:23.657441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.660693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.660716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.044 [2024-12-09 14:14:23.660729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:21:22.044 [2024-12-09 14:14:23.660738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.667064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.667109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.044 [2024-12-09 14:14:23.667123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.300 ms 00:21:22.044 [2024-12-09 14:14:23.667131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.694055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.694114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.044 [2024-12-09 14:14:23.694132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.830 ms 00:21:22.044 [2024-12-09 14:14:23.694140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.711723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.711777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.044 [2024-12-09 14:14:23.711795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.521 ms 00:21:22.044 [2024-12-09 14:14:23.711805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.711985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.711999] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.044 [2024-12-09 14:14:23.712012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:22.044 [2024-12-09 14:14:23.712025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.738397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.738445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:22.044 [2024-12-09 14:14:23.738461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.350 ms 00:21:22.044 [2024-12-09 14:14:23.738468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.764020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.764069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:22.044 [2024-12-09 14:14:23.764083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.495 ms 00:21:22.044 [2024-12-09 14:14:23.764091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.789617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.789667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.044 [2024-12-09 14:14:23.789681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.464 ms 00:21:22.044 [2024-12-09 14:14:23.789688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.814932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.044 [2024-12-09 14:14:23.814986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.044 [2024-12-09 14:14:23.815001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.124 ms 00:21:22.044 [2024-12-09 14:14:23.815008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.044 [2024-12-09 14:14:23.815061] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.044 [2024-12-09 14:14:23.815081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815169] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.044 [2024-12-09 14:14:23.815177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 
[2024-12-09 14:14:23.815392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:22.045 [2024-12-09 14:14:23.815635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.045 [2024-12-09 14:14:23.815905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.815994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.046 [2024-12-09 14:14:23.816011] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.046 [2024-12-09 14:14:23.816179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05373d07-4b4e-457d-b347-de8cd136f1a9 00:21:22.046 [2024-12-09 14:14:23.816188] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:22.046 [2024-12-09 14:14:23.816203] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:22.046 [2024-12-09 14:14:23.816211] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:22.046 [2024-12-09 14:14:23.816220] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:22.046 [2024-12-09 14:14:23.816227] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.046 [2024-12-09 14:14:23.816237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.046 [2024-12-09 14:14:23.816245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.046 [2024-12-09 14:14:23.816254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.046 [2024-12-09 14:14:23.816260] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:22.046 [2024-12-09 14:14:23.816270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.046 [2024-12-09 14:14:23.816278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.046 [2024-12-09 14:14:23.816292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:21:22.046 [2024-12-09 14:14:23.816300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.046 [2024-12-09 14:14:23.830255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.046 [2024-12-09 14:14:23.830300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.046 [2024-12-09 14:14:23.830315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.905 ms 00:21:22.046 [2024-12-09 14:14:23.830323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.046 [2024-12-09 14:14:23.830758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.046 [2024-12-09 14:14:23.830773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.046 [2024-12-09 14:14:23.830785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:21:22.046 [2024-12-09 14:14:23.830792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.307 [2024-12-09 14:14:23.877440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.307 [2024-12-09 14:14:23.877493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.307 [2024-12-09 14:14:23.877509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.307 [2024-12-09 14:14:23.877518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.307 [2024-12-09 14:14:23.877604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.307 [2024-12-09 14:14:23.877618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.308 [2024-12-09 14:14:23.877629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:23.877637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:23.877748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:23.877759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.308 [2024-12-09 14:14:23.877770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:23.877778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:23.877801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:23.877809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.308 [2024-12-09 14:14:23.877822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:23.877830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:23.963123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:23.963189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.308 [2024-12-09 14:14:23.963205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:22.308 [2024-12-09 14:14:23.963214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:24.033508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:24.033588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.308 [2024-12-09 14:14:24.033608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:24.033617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:24.033704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:24.033714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.308 [2024-12-09 14:14:24.033726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:24.033735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:24.033805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:24.033815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.308 [2024-12-09 14:14:24.033827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:24.033837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:24.033945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:24.033957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.308 [2024-12-09 14:14:24.033968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:24.033975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:24.034015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:24.034025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:22.308 [2024-12-09 14:14:24.034036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:24.034045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:24.034091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:24.034101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.308 [2024-12-09 14:14:24.034112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:24.034120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:24.034174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.308 [2024-12-09 14:14:24.034185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.308 [2024-12-09 14:14:24.034197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.308 [2024-12-09 14:14:24.034207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.308 [2024-12-09 14:14:24.034360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.534 ms, result 0 00:21:22.308 true 00:21:22.308 14:14:24 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77297 
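killprocess, traced step by step just below, tears down the SPDK app before the test's data phase: it sanity-checks the pid, probes the process with kill -0, verifies the command name via ps so it never signals a sudo wrapper, then kills and reaps it. A rough sketch reconstructed only from the traced commands; the real helper in common/autotest_common.sh may carry additional branches the trace does not exercise:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # '[' -z 77297 ']' in the trace
        kill -0 "$pid" || return 0           # nothing to do if it is already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 here
            [ "$process_name" = sudo ] && return 1           # sudo handling not shown in the trace
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap the child so the test can proceed
    }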
00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77297 ']' 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77297 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77297 00:21:22.308 killing process with pid 77297 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77297' 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77297 00:21:22.308 14:14:24 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77297 00:21:27.591 14:14:28 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:30.902 262144+0 records in 00:21:30.902 262144+0 records out 00:21:30.902 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.73608 s, 287 MB/s 00:21:30.902 14:14:32 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:32.804 14:14:34 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:32.804 [2024-12-09 14:14:34.343078] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:21:32.804 [2024-12-09 14:14:34.343193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77528 ] 00:21:32.804 [2024-12-09 14:14:34.514602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.065 [2024-12-09 14:14:34.613346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.327 [2024-12-09 14:14:34.870038] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:33.327 [2024-12-09 14:14:34.870105] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:33.327 [2024-12-09 14:14:35.027100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.327 [2024-12-09 14:14:35.027260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:33.327 [2024-12-09 14:14:35.027281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:33.327 [2024-12-09 14:14:35.027289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.027343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.027356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:33.328 [2024-12-09 14:14:35.027364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:33.328 [2024-12-09 14:14:35.027371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.027390] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:33.328 [2024-12-09 14:14:35.028106] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:33.328 [2024-12-09 14:14:35.028122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.028130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:33.328 [2024-12-09 14:14:35.028139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:21:33.328 [2024-12-09 14:14:35.028146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.029205] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:33.328 [2024-12-09 14:14:35.041751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.041879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:33.328 [2024-12-09 14:14:35.041895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.548 ms 00:21:33.328 [2024-12-09 14:14:35.041903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.041954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.041964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:33.328 [2024-12-09 14:14:35.041972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:33.328 [2024-12-09 14:14:35.041979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.046987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.047016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:33.328 [2024-12-09 14:14:35.047025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.960 ms 00:21:33.328 [2024-12-09 14:14:35.047036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.047102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.047111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:33.328 [2024-12-09 14:14:35.047118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:33.328 [2024-12-09 14:14:35.047125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.047173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.047182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:33.328 [2024-12-09 14:14:35.047190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:33.328 [2024-12-09 14:14:35.047197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.047219] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:33.328 [2024-12-09 14:14:35.050405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.050430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:33.328 [2024-12-09 14:14:35.050441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.190 ms 00:21:33.328 [2024-12-09 14:14:35.050448] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.050478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.050485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:33.328 [2024-12-09 14:14:35.050493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:33.328 [2024-12-09 14:14:35.050500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.050519] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:33.328 [2024-12-09 14:14:35.050553] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:33.328 [2024-12-09 14:14:35.050587] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:33.328 [2024-12-09 14:14:35.050604] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:33.328 [2024-12-09 14:14:35.050706] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:33.328 [2024-12-09 14:14:35.050716] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:33.328 [2024-12-09 14:14:35.050726] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:33.328 [2024-12-09 14:14:35.050735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:33.328 [2024-12-09 14:14:35.050744] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:33.328 [2024-12-09 14:14:35.050752] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:33.328 [2024-12-09 14:14:35.050759] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:33.328 [2024-12-09 14:14:35.050769] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:33.328 [2024-12-09 14:14:35.050777] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:33.328 [2024-12-09 14:14:35.050784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.050792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:33.328 [2024-12-09 14:14:35.050799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:21:33.328 [2024-12-09 14:14:35.050806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.050888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.328 [2024-12-09 14:14:35.050896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:33.328 [2024-12-09 14:14:35.050903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:33.328 [2024-12-09 14:14:35.050909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.328 [2024-12-09 14:14:35.051011] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:33.328 [2024-12-09 14:14:35.051021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:33.328 [2024-12-09 14:14:35.051028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:33.328 [2024-12-09 14:14:35.051036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:33.328 [2024-12-09 14:14:35.051050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:33.328 [2024-12-09 14:14:35.051065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:33.328 [2024-12-09 14:14:35.051071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:33.328 [2024-12-09 14:14:35.051085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:33.328 [2024-12-09 14:14:35.051093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:33.328 [2024-12-09 14:14:35.051100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:33.328 [2024-12-09 14:14:35.051113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:33.328 [2024-12-09 14:14:35.051120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:33.328 [2024-12-09 14:14:35.051126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:33.328 [2024-12-09 14:14:35.051139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:33.328 [2024-12-09 14:14:35.051145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:33.328 [2024-12-09 14:14:35.051159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:33.328 [2024-12-09 14:14:35.051172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:33.328 [2024-12-09 14:14:35.051179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:33.328 [2024-12-09 14:14:35.051191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:33.328 [2024-12-09 14:14:35.051198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:33.328 [2024-12-09 14:14:35.051211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:33.328 [2024-12-09 14:14:35.051218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:33.328 [2024-12-09 14:14:35.051231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:33.328 [2024-12-09 14:14:35.051238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:33.328 [2024-12-09 14:14:35.051250] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:33.328 [2024-12-09 14:14:35.051257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:33.328 [2024-12-09 14:14:35.051264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:33.328 [2024-12-09 14:14:35.051270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:33.328 [2024-12-09 14:14:35.051276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:33.328 [2024-12-09 14:14:35.051282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.328 [2024-12-09 14:14:35.051289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:33.328 [2024-12-09 14:14:35.051295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:33.328 [2024-12-09 14:14:35.051302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.329 [2024-12-09 14:14:35.051309] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:33.329 [2024-12-09 14:14:35.051316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:33.329 [2024-12-09 14:14:35.051324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:33.329 [2024-12-09 14:14:35.051331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.329 [2024-12-09 14:14:35.051338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:33.329 [2024-12-09 14:14:35.051346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:33.329 [2024-12-09 14:14:35.051352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:33.329 [2024-12-09 14:14:35.051359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:33.329 [2024-12-09 14:14:35.051365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:33.329 [2024-12-09 14:14:35.051372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:33.329 [2024-12-09 14:14:35.051380] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:33.329 [2024-12-09 14:14:35.051388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:33.329 [2024-12-09 14:14:35.051398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:33.329 [2024-12-09 14:14:35.051406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:33.329 [2024-12-09 14:14:35.051412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:33.329 [2024-12-09 14:14:35.051419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:33.329 [2024-12-09 14:14:35.051426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:33.329 [2024-12-09 14:14:35.051433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:33.329 [2024-12-09 14:14:35.051440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:33.329 [2024-12-09 14:14:35.051446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:33.329 [2024-12-09 14:14:35.051453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:33.329 [2024-12-09 14:14:35.051460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:33.329 [2024-12-09 14:14:35.051467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:33.329 [2024-12-09 14:14:35.051474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:33.329 [2024-12-09 14:14:35.051481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:33.329 [2024-12-09 14:14:35.051488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:33.329 [2024-12-09 14:14:35.051496] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:33.329 [2024-12-09 14:14:35.051503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:33.329 [2024-12-09 14:14:35.051512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:33.329 [2024-12-09 14:14:35.051518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:33.329 [2024-12-09 14:14:35.051525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:33.329 [2024-12-09 14:14:35.051533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:33.329 [2024-12-09 14:14:35.051551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.329 [2024-12-09 14:14:35.051559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:33.329 [2024-12-09 14:14:35.051566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:21:33.329 [2024-12-09 14:14:35.051574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.329 [2024-12-09 14:14:35.077285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.329 [2024-12-09 14:14:35.077408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.329 [2024-12-09 14:14:35.077423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.657 ms 00:21:33.329 [2024-12-09 14:14:35.077436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.329 [2024-12-09 14:14:35.077519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.329 [2024-12-09 14:14:35.077527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:33.329 [2024-12-09 14:14:35.077545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:21:33.329 [2024-12-09 14:14:35.077553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.124066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.124102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.590 [2024-12-09 14:14:35.124114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.462 ms 00:21:33.590 [2024-12-09 14:14:35.124122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.124160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.124168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.590 [2024-12-09 14:14:35.124180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:33.590 [2024-12-09 14:14:35.124187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.124560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.124577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.590 [2024-12-09 14:14:35.124586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:21:33.590 [2024-12-09 14:14:35.124593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.124711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.124720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.590 [2024-12-09 14:14:35.124733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:33.590 [2024-12-09 14:14:35.124741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.137768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.137898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:33.590 [2024-12-09 14:14:35.137914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.007 ms 00:21:33.590 [2024-12-09 14:14:35.137923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.150661] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:33.590 [2024-12-09 14:14:35.150693] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:33.590 [2024-12-09 14:14:35.150705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.150713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:33.590 [2024-12-09 14:14:35.150722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.692 ms 00:21:33.590 [2024-12-09 14:14:35.150729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.175044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.175081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:33.590 [2024-12-09 14:14:35.175091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.279 ms 00:21:33.590 [2024-12-09 14:14:35.175099] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.186589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.186697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:33.590 [2024-12-09 14:14:35.186752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.452 ms 00:21:33.590 [2024-12-09 14:14:35.186774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.198186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.198313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:33.590 [2024-12-09 14:14:35.198368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.130 ms 00:21:33.590 [2024-12-09 14:14:35.198391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.199377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.199504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:33.590 [2024-12-09 14:14:35.199576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:21:33.590 [2024-12-09 14:14:35.199593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.254013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.254059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:33.590 [2024-12-09 14:14:35.254072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.399 ms 00:21:33.590 [2024-12-09 14:14:35.254085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.264227] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:33.590 [2024-12-09 14:14:35.266326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.266356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:33.590 [2024-12-09 14:14:35.266367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:21:33.590 [2024-12-09 14:14:35.266376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.266457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.266468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:33.590 [2024-12-09 14:14:35.266478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:33.590 [2024-12-09 14:14:35.266487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.266572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.266584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:33.590 [2024-12-09 14:14:35.266593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:33.590 [2024-12-09 14:14:35.266602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.266621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.266630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:33.590 [2024-12-09 14:14:35.266638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:33.590 [2024-12-09 14:14:35.266647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.590 [2024-12-09 14:14:35.266676] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:33.590 [2024-12-09 14:14:35.266689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.590 [2024-12-09 14:14:35.266697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:33.591 [2024-12-09 14:14:35.266704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:33.591 [2024-12-09 14:14:35.266712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.591 [2024-12-09 14:14:35.290348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.591 [2024-12-09 14:14:35.290382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:33.591 [2024-12-09 14:14:35.290393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.620 ms 00:21:33.591 [2024-12-09 14:14:35.290404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.591 [2024-12-09 14:14:35.290468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.591 [2024-12-09 14:14:35.290477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:33.591 [2024-12-09 14:14:35.290485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:33.591 [2024-12-09 14:14:35.290493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.591 [2024-12-09 14:14:35.291379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.880 ms, result 0 00:21:34.535  [2024-12-09T14:14:37.716Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-09T14:14:38.654Z] Copying: 33/1024 [MB] (14 MBps) [2024-12-09T14:14:39.590Z] Copying: 49/1024 [MB] (16 MBps) [2024-12-09T14:14:40.531Z] Copying: 102/1024 [MB] (52 MBps) [2024-12-09T14:14:41.468Z] Copying: 130/1024 [MB] (27 MBps) [2024-12-09T14:14:42.412Z] Copying: 177/1024 [MB] (47 MBps) [2024-12-09T14:14:43.348Z] Copying: 200/1024 [MB] (22 MBps) [2024-12-09T14:14:44.730Z] Copying: 233/1024 [MB] (33 MBps) [2024-12-09T14:14:45.676Z] Copying: 254/1024 [MB] (21 MBps) [2024-12-09T14:14:46.619Z] Copying: 275/1024 [MB] (21 MBps) [2024-12-09T14:14:47.563Z] Copying: 301/1024 [MB] (25 MBps) [2024-12-09T14:14:48.504Z] Copying: 325/1024 [MB] (23 MBps) [2024-12-09T14:14:49.447Z] Copying: 348/1024 [MB] (22 MBps) [2024-12-09T14:14:50.391Z] Copying: 369/1024 [MB] (21 MBps) [2024-12-09T14:14:51.399Z] Copying: 386/1024 [MB] (17 MBps) [2024-12-09T14:14:52.343Z] Copying: 406/1024 [MB] (20 MBps) [2024-12-09T14:14:53.730Z] Copying: 428/1024 [MB] (21 MBps) [2024-12-09T14:14:54.674Z] Copying: 449/1024 [MB] (21 MBps) [2024-12-09T14:14:55.616Z] Copying: 469/1024 [MB] (19 MBps) [2024-12-09T14:14:56.558Z] Copying: 499/1024 [MB] (29 MBps) [2024-12-09T14:14:57.498Z] Copying: 524/1024 [MB] (25 MBps) [2024-12-09T14:14:58.439Z] Copying: 541/1024 [MB] (16 MBps) [2024-12-09T14:14:59.379Z] Copying: 559/1024 [MB] (18 MBps) [2024-12-09T14:15:00.320Z] Copying: 578/1024 [MB] (19 MBps) [2024-12-09T14:15:01.701Z] Copying: 590/1024 [MB] (11 MBps) [2024-12-09T14:15:02.642Z] Copying: 608/1024 [MB] (18 MBps) [2024-12-09T14:15:03.583Z] Copying: 650/1024 [MB] (41 
MBps) [2024-12-09T14:15:04.524Z] Copying: 672/1024 [MB] (22 MBps) [2024-12-09T14:15:05.462Z] Copying: 696/1024 [MB] (23 MBps) [2024-12-09T14:15:06.404Z] Copying: 719/1024 [MB] (23 MBps) [2024-12-09T14:15:07.348Z] Copying: 741/1024 [MB] (22 MBps) [2024-12-09T14:15:08.728Z] Copying: 764/1024 [MB] (23 MBps) [2024-12-09T14:15:09.669Z] Copying: 796/1024 [MB] (32 MBps) [2024-12-09T14:15:10.640Z] Copying: 819/1024 [MB] (22 MBps) [2024-12-09T14:15:11.585Z] Copying: 859/1024 [MB] (39 MBps) [2024-12-09T14:15:12.526Z] Copying: 879/1024 [MB] (19 MBps) [2024-12-09T14:15:13.467Z] Copying: 901/1024 [MB] (21 MBps) [2024-12-09T14:15:14.409Z] Copying: 928/1024 [MB] (27 MBps) [2024-12-09T14:15:15.347Z] Copying: 955/1024 [MB] (26 MBps) [2024-12-09T14:15:16.731Z] Copying: 978/1024 [MB] (23 MBps) [2024-12-09T14:15:17.300Z] Copying: 1003/1024 [MB] (24 MBps) [2024-12-09T14:15:17.300Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-09 14:15:17.155464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.506 [2024-12-09 14:15:17.155500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:15.506 [2024-12-09 14:15:17.155511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:15.506 [2024-12-09 14:15:17.155517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.506 [2024-12-09 14:15:17.155534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:15.506 [2024-12-09 14:15:17.157734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.506 [2024-12-09 14:15:17.157855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:15.506 [2024-12-09 14:15:17.157876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:22:15.506 [2024-12-09 14:15:17.157882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.506 [2024-12-09 14:15:17.159266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.506 [2024-12-09 14:15:17.159294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:15.506 [2024-12-09 14:15:17.159301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:22:15.506 [2024-12-09 14:15:17.159307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.506 [2024-12-09 14:15:17.170555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.506 [2024-12-09 14:15:17.170581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:15.506 [2024-12-09 14:15:17.170589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.235 ms 00:22:15.506 [2024-12-09 14:15:17.170596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.506 [2024-12-09 14:15:17.175345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.506 [2024-12-09 14:15:17.175367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:15.506 [2024-12-09 14:15:17.175375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.723 ms 00:22:15.506 [2024-12-09 14:15:17.175382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.506 [2024-12-09 14:15:17.193628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.506 [2024-12-09 14:15:17.193654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:15.506 [2024-12-09 
14:15:17.193662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.208 ms 00:22:15.506 [2024-12-09 14:15:17.193667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.506 [2024-12-09 14:15:17.204523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.507 [2024-12-09 14:15:17.204558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:15.507 [2024-12-09 14:15:17.204568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.830 ms 00:22:15.507 [2024-12-09 14:15:17.204574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.507 [2024-12-09 14:15:17.204662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.507 [2024-12-09 14:15:17.204672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:15.507 [2024-12-09 14:15:17.204678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:15.507 [2024-12-09 14:15:17.204684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.507 [2024-12-09 14:15:17.222583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.507 [2024-12-09 14:15:17.222607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:15.507 [2024-12-09 14:15:17.222614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.889 ms 00:22:15.507 [2024-12-09 14:15:17.222623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.507 [2024-12-09 14:15:17.239846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.507 [2024-12-09 14:15:17.239870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:15.507 [2024-12-09 14:15:17.239877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.198 ms 00:22:15.507 [2024-12-09 14:15:17.239883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.507 [2024-12-09 14:15:17.256884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.507 [2024-12-09 14:15:17.256991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:15.507 [2024-12-09 14:15:17.257004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.977 ms 00:22:15.507 [2024-12-09 14:15:17.257010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.507 [2024-12-09 14:15:17.274060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.507 [2024-12-09 14:15:17.274086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:15.507 [2024-12-09 14:15:17.274093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.011 ms 00:22:15.507 [2024-12-09 14:15:17.274098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.507 [2024-12-09 14:15:17.274123] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:15.507 [2024-12-09 14:15:17.274133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274295] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 
14:15:17.274433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:15.507 [2024-12-09 14:15:17.274455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:22:15.508 [2024-12-09 14:15:17.274593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:15.508 [2024-12-09 14:15:17.274746] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:15.508 [2024-12-09 14:15:17.274753] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05373d07-4b4e-457d-b347-de8cd136f1a9 00:22:15.508 [2024-12-09 14:15:17.274759] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:15.508 [2024-12-09 
14:15:17.274764] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:15.508 [2024-12-09 14:15:17.274769] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:15.508 [2024-12-09 14:15:17.274775] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:15.508 [2024-12-09 14:15:17.274781] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:15.508 [2024-12-09 14:15:17.274791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:15.508 [2024-12-09 14:15:17.274796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:15.508 [2024-12-09 14:15:17.274801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:15.508 [2024-12-09 14:15:17.274806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:15.508 [2024-12-09 14:15:17.274811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.508 [2024-12-09 14:15:17.274816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:15.508 [2024-12-09 14:15:17.274822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:22:15.508 [2024-12-09 14:15:17.274827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.508 [2024-12-09 14:15:17.284355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.508 [2024-12-09 14:15:17.284458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:15.508 [2024-12-09 14:15:17.284470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.515 ms 00:22:15.508 [2024-12-09 14:15:17.284476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.508 [2024-12-09 14:15:17.284753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.508 [2024-12-09 14:15:17.284760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:15.508 [2024-12-09 14:15:17.284766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:22:15.508 [2024-12-09 14:15:17.284776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.769 [2024-12-09 14:15:17.310745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.769 [2024-12-09 14:15:17.310835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.769 [2024-12-09 14:15:17.310847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.769 [2024-12-09 14:15:17.310853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.769 [2024-12-09 14:15:17.310890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.769 [2024-12-09 14:15:17.310896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.769 [2024-12-09 14:15:17.310902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.769 [2024-12-09 14:15:17.310911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.769 [2024-12-09 14:15:17.310951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.310958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.770 [2024-12-09 14:15:17.310964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.310969] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.310980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.310986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.770 [2024-12-09 14:15:17.310992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.310997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.371063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.371094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:15.770 [2024-12-09 14:15:17.371103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.371109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.420179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.420210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.770 [2024-12-09 14:15:17.420219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.420229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.420282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.420289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.770 [2024-12-09 14:15:17.420295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.420301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.420328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.420334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.770 [2024-12-09 14:15:17.420340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.420346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.420413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.420421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.770 [2024-12-09 14:15:17.420427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.420432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.420454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.420460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:15.770 [2024-12-09 14:15:17.420466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.420472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.420498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.420506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.770 [2024-12-09 14:15:17.420513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:15.770 [2024-12-09 14:15:17.420519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.420568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.770 [2024-12-09 14:15:17.420576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.770 [2024-12-09 14:15:17.420595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.770 [2024-12-09 14:15:17.420601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.770 [2024-12-09 14:15:17.420689] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.201 ms, result 0 00:22:16.707 00:22:16.707 00:22:16.707 14:15:18 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:16.707 [2024-12-09 14:15:18.323157] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:22:16.707 [2024-12-09 14:15:18.323239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77988 ] 00:22:16.707 [2024-12-09 14:15:18.471312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.965 [2024-12-09 14:15:18.547147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.965 [2024-12-09 14:15:18.756339] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.965 [2024-12-09 14:15:18.756393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:17.227 [2024-12-09 14:15:18.912815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.912859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:17.227 [2024-12-09 14:15:18.912872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:17.227 [2024-12-09 14:15:18.912879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.912926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.912938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:17.227 [2024-12-09 14:15:18.912946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:17.227 [2024-12-09 14:15:18.912953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.912969] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:17.227 [2024-12-09 14:15:18.913707] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:17.227 [2024-12-09 14:15:18.913724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.913732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:17.227 [2024-12-09 14:15:18.913741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:22:17.227 [2024-12-09 14:15:18.913747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 
[2024-12-09 14:15:18.914807] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:17.227 [2024-12-09 14:15:18.927304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.927338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:17.227 [2024-12-09 14:15:18.927349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.499 ms 00:22:17.227 [2024-12-09 14:15:18.927356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.927409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.927418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:17.227 [2024-12-09 14:15:18.927427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:17.227 [2024-12-09 14:15:18.927434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.932336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.932364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:17.227 [2024-12-09 14:15:18.932374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.854 ms 00:22:17.227 [2024-12-09 14:15:18.932385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.932449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.932458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:17.227 [2024-12-09 14:15:18.932466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:17.227 [2024-12-09 14:15:18.932473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.932517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.932527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:17.227 [2024-12-09 14:15:18.932550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:17.227 [2024-12-09 14:15:18.932558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.932580] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:17.227 [2024-12-09 14:15:18.935871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.935998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:17.227 [2024-12-09 14:15:18.936018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.295 ms 00:22:17.227 [2024-12-09 14:15:18.936025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.936057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.936065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:17.227 [2024-12-09 14:15:18.936072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:17.227 [2024-12-09 14:15:18.936079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.936097] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:17.227 
[2024-12-09 14:15:18.936116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:17.227 [2024-12-09 14:15:18.936150] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:17.227 [2024-12-09 14:15:18.936167] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:17.227 [2024-12-09 14:15:18.936270] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:17.227 [2024-12-09 14:15:18.936280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:17.227 [2024-12-09 14:15:18.936290] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:17.227 [2024-12-09 14:15:18.936299] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:17.227 [2024-12-09 14:15:18.936308] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:17.227 [2024-12-09 14:15:18.936316] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:17.227 [2024-12-09 14:15:18.936323] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:17.227 [2024-12-09 14:15:18.936332] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:17.227 [2024-12-09 14:15:18.936339] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:17.227 [2024-12-09 14:15:18.936347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.936354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:17.227 [2024-12-09 14:15:18.936362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:22:17.227 [2024-12-09 14:15:18.936368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.936454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.227 [2024-12-09 14:15:18.936462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:17.227 [2024-12-09 14:15:18.936469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:17.227 [2024-12-09 14:15:18.936476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.227 [2024-12-09 14:15:18.936605] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:17.227 [2024-12-09 14:15:18.936617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:17.227 [2024-12-09 14:15:18.936625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:17.227 [2024-12-09 14:15:18.936632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.227 [2024-12-09 14:15:18.936640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:17.227 [2024-12-09 14:15:18.936646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:17.227 [2024-12-09 14:15:18.936653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:17.227 [2024-12-09 14:15:18.936659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:17.227 [2024-12-09 14:15:18.936667] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.12 MiB 00:22:17.227 [2024-12-09 14:15:18.936674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:17.227 [2024-12-09 14:15:18.936681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:17.227 [2024-12-09 14:15:18.936688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:17.227 [2024-12-09 14:15:18.936694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:17.227 [2024-12-09 14:15:18.936706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:17.227 [2024-12-09 14:15:18.936712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:17.227 [2024-12-09 14:15:18.936719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.227 [2024-12-09 14:15:18.936725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:17.227 [2024-12-09 14:15:18.936732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:17.227 [2024-12-09 14:15:18.936738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.227 [2024-12-09 14:15:18.936745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:17.227 [2024-12-09 14:15:18.936752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:17.227 [2024-12-09 14:15:18.936758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.227 [2024-12-09 14:15:18.936765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:17.227 [2024-12-09 14:15:18.936771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:17.228 [2024-12-09 14:15:18.936778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.228 [2024-12-09 14:15:18.936784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:17.228 [2024-12-09 14:15:18.936791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:17.228 [2024-12-09 14:15:18.936797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.228 [2024-12-09 14:15:18.936803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:17.228 [2024-12-09 14:15:18.936810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:17.228 [2024-12-09 14:15:18.936816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.228 [2024-12-09 14:15:18.936822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:17.228 [2024-12-09 14:15:18.936829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:17.228 [2024-12-09 14:15:18.936835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:17.228 [2024-12-09 14:15:18.936841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:17.228 [2024-12-09 14:15:18.936847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:17.228 [2024-12-09 14:15:18.936862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:17.228 [2024-12-09 14:15:18.936868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:17.228 [2024-12-09 14:15:18.936875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:17.228 [2024-12-09 14:15:18.936881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.228 [2024-12-09 14:15:18.936887] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:17.228 [2024-12-09 14:15:18.936893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:17.228 [2024-12-09 14:15:18.936899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.228 [2024-12-09 14:15:18.936906] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:17.228 [2024-12-09 14:15:18.936913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:17.228 [2024-12-09 14:15:18.936920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:17.228 [2024-12-09 14:15:18.936927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.228 [2024-12-09 14:15:18.936934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:17.228 [2024-12-09 14:15:18.936941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:17.228 [2024-12-09 14:15:18.936948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:17.228 [2024-12-09 14:15:18.936954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:17.228 [2024-12-09 14:15:18.936960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:17.228 [2024-12-09 14:15:18.936966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:17.228 [2024-12-09 14:15:18.936974] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:17.228 [2024-12-09 14:15:18.936986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:17.228 [2024-12-09 14:15:18.936996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:17.228 [2024-12-09 14:15:18.937003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:17.228 [2024-12-09 14:15:18.937010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:17.228 [2024-12-09 14:15:18.937017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:17.228 [2024-12-09 14:15:18.937023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:17.228 [2024-12-09 14:15:18.937030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:17.228 [2024-12-09 14:15:18.937037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:17.228 [2024-12-09 14:15:18.937043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:17.228 [2024-12-09 14:15:18.937050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:17.228 [2024-12-09 14:15:18.937057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:17.228 [2024-12-09 14:15:18.937064] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:17.228 [2024-12-09 14:15:18.937070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:17.228 [2024-12-09 14:15:18.937077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:17.228 [2024-12-09 14:15:18.937084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:17.228 [2024-12-09 14:15:18.937091] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:17.228 [2024-12-09 14:15:18.937100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:17.228 [2024-12-09 14:15:18.937107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:17.228 [2024-12-09 14:15:18.937114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:17.228 [2024-12-09 14:15:18.937121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:17.228 [2024-12-09 14:15:18.937128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:17.228 [2024-12-09 14:15:18.937141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.228 [2024-12-09 14:15:18.937148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:17.228 [2024-12-09 14:15:18.937166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:22:17.228 [2024-12-09 14:15:18.937173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.228 [2024-12-09 14:15:18.962699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.228 [2024-12-09 14:15:18.962828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:17.228 [2024-12-09 14:15:18.962843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.483 ms 00:22:17.228 [2024-12-09 14:15:18.962856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.228 [2024-12-09 14:15:18.962938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.228 [2024-12-09 14:15:18.962946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:17.228 [2024-12-09 14:15:18.962954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:17.228 [2024-12-09 14:15:18.962961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.228 [2024-12-09 14:15:19.004852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.228 [2024-12-09 14:15:19.004986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.228 [2024-12-09 14:15:19.005004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.842 ms 00:22:17.228 [2024-12-09 14:15:19.005013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.228 [2024-12-09 14:15:19.005050] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.228 [2024-12-09 14:15:19.005059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:17.228 [2024-12-09 14:15:19.005072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:17.228 [2024-12-09 14:15:19.005079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.228 [2024-12-09 14:15:19.005447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.228 [2024-12-09 14:15:19.005463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:17.228 [2024-12-09 14:15:19.005471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:22:17.228 [2024-12-09 14:15:19.005478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.228 [2024-12-09 14:15:19.005621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.228 [2024-12-09 14:15:19.005631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:17.228 [2024-12-09 14:15:19.005644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:17.228 [2024-12-09 14:15:19.005652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.018554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.018583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:17.489 [2024-12-09 14:15:19.018593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.864 ms 00:22:17.489 [2024-12-09 14:15:19.018600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.030863] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:17.489 [2024-12-09 14:15:19.030894] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:17.489 [2024-12-09 14:15:19.030906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.030914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:17.489 [2024-12-09 14:15:19.030923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.219 ms 00:22:17.489 [2024-12-09 14:15:19.030930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.055504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.055555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:17.489 [2024-12-09 14:15:19.055569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.167 ms 00:22:17.489 [2024-12-09 14:15:19.055578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.066720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.066844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:17.489 [2024-12-09 14:15:19.066860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.098 ms 00:22:17.489 [2024-12-09 14:15:19.066867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.077686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 
14:15:19.077789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:17.489 [2024-12-09 14:15:19.077803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.789 ms 00:22:17.489 [2024-12-09 14:15:19.077811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.078391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.078413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:17.489 [2024-12-09 14:15:19.078424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:22:17.489 [2024-12-09 14:15:19.078431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.131668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.131710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:17.489 [2024-12-09 14:15:19.131726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.221 ms 00:22:17.489 [2024-12-09 14:15:19.131735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.141815] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:17.489 [2024-12-09 14:15:19.143941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.143968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:17.489 [2024-12-09 14:15:19.143980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.167 ms 00:22:17.489 [2024-12-09 14:15:19.143988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.144074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.144085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:17.489 [2024-12-09 14:15:19.144097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:17.489 [2024-12-09 14:15:19.144105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.144173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.144183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:17.489 [2024-12-09 14:15:19.144191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:17.489 [2024-12-09 14:15:19.144198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.489 [2024-12-09 14:15:19.144216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.489 [2024-12-09 14:15:19.144224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:17.489 [2024-12-09 14:15:19.144232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:17.490 [2024-12-09 14:15:19.144239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.490 [2024-12-09 14:15:19.144270] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:17.490 [2024-12-09 14:15:19.144280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.490 [2024-12-09 14:15:19.144287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:17.490 
[2024-12-09 14:15:19.144295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:17.490 [2024-12-09 14:15:19.144302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.490 [2024-12-09 14:15:19.167654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.490 [2024-12-09 14:15:19.167773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:17.490 [2024-12-09 14:15:19.167793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.336 ms 00:22:17.490 [2024-12-09 14:15:19.167801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.490 [2024-12-09 14:15:19.167863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.490 [2024-12-09 14:15:19.167872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:17.490 [2024-12-09 14:15:19.167881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:17.490 [2024-12-09 14:15:19.167888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.490 [2024-12-09 14:15:19.168899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 255.644 ms, result 0 00:22:18.870  [2024-12-09T14:15:21.605Z] Copying: 29/1024 [MB] (29 MBps) [2024-12-09T14:15:22.547Z] Copying: 55/1024 [MB] (26 MBps) [2024-12-09T14:15:23.490Z] Copying: 76/1024 [MB] (21 MBps) [2024-12-09T14:15:24.432Z] Copying: 95/1024 [MB] (18 MBps) [2024-12-09T14:15:25.375Z] Copying: 118/1024 [MB] (22 MBps) [2024-12-09T14:15:26.762Z] Copying: 139/1024 [MB] (21 MBps) [2024-12-09T14:15:27.707Z] Copying: 155/1024 [MB] (15 MBps) [2024-12-09T14:15:28.651Z] Copying: 184/1024 [MB] (29 MBps) [2024-12-09T14:15:29.592Z] Copying: 202/1024 [MB] (17 MBps) [2024-12-09T14:15:30.533Z] Copying: 213/1024 [MB] (11 MBps) [2024-12-09T14:15:31.476Z] Copying: 228/1024 [MB] (14 MBps) [2024-12-09T14:15:32.419Z] Copying: 245/1024 [MB] (17 MBps) [2024-12-09T14:15:33.362Z] Copying: 261/1024 [MB] (15 MBps) [2024-12-09T14:15:34.748Z] Copying: 279/1024 [MB] (18 MBps) [2024-12-09T14:15:35.691Z] Copying: 301/1024 [MB] (21 MBps) [2024-12-09T14:15:36.642Z] Copying: 323/1024 [MB] (22 MBps) [2024-12-09T14:15:37.666Z] Copying: 345/1024 [MB] (22 MBps) [2024-12-09T14:15:38.609Z] Copying: 368/1024 [MB] (23 MBps) [2024-12-09T14:15:39.554Z] Copying: 390/1024 [MB] (21 MBps) [2024-12-09T14:15:40.497Z] Copying: 421/1024 [MB] (31 MBps) [2024-12-09T14:15:41.441Z] Copying: 443/1024 [MB] (21 MBps) [2024-12-09T14:15:42.386Z] Copying: 457/1024 [MB] (13 MBps) [2024-12-09T14:15:43.772Z] Copying: 483/1024 [MB] (26 MBps) [2024-12-09T14:15:44.344Z] Copying: 495/1024 [MB] (12 MBps) [2024-12-09T14:15:45.742Z] Copying: 518/1024 [MB] (23 MBps) [2024-12-09T14:15:46.686Z] Copying: 535/1024 [MB] (17 MBps) [2024-12-09T14:15:47.630Z] Copying: 554/1024 [MB] (18 MBps) [2024-12-09T14:15:48.575Z] Copying: 569/1024 [MB] (15 MBps) [2024-12-09T14:15:49.520Z] Copying: 582/1024 [MB] (12 MBps) [2024-12-09T14:15:50.460Z] Copying: 594/1024 [MB] (12 MBps) [2024-12-09T14:15:51.403Z] Copying: 610/1024 [MB] (15 MBps) [2024-12-09T14:15:52.347Z] Copying: 624/1024 [MB] (14 MBps) [2024-12-09T14:15:53.735Z] Copying: 638/1024 [MB] (14 MBps) [2024-12-09T14:15:54.681Z] Copying: 650/1024 [MB] (12 MBps) [2024-12-09T14:15:55.626Z] Copying: 664/1024 [MB] (14 MBps) [2024-12-09T14:15:56.570Z] Copying: 675/1024 [MB] (10 MBps) [2024-12-09T14:15:57.516Z] Copying: 686/1024 [MB] (11 MBps) [2024-12-09T14:15:58.458Z] 
Copying: 700/1024 [MB] (14 MBps) [2024-12-09T14:15:59.404Z] Copying: 716/1024 [MB] (15 MBps) [2024-12-09T14:16:00.360Z] Copying: 731/1024 [MB] (15 MBps) [2024-12-09T14:16:01.747Z] Copying: 746/1024 [MB] (15 MBps) [2024-12-09T14:16:02.690Z] Copying: 759/1024 [MB] (12 MBps) [2024-12-09T14:16:03.637Z] Copying: 770/1024 [MB] (10 MBps) [2024-12-09T14:16:04.580Z] Copying: 781/1024 [MB] (10 MBps) [2024-12-09T14:16:05.524Z] Copying: 791/1024 [MB] (10 MBps) [2024-12-09T14:16:06.468Z] Copying: 802/1024 [MB] (10 MBps) [2024-12-09T14:16:07.411Z] Copying: 813/1024 [MB] (10 MBps) [2024-12-09T14:16:08.355Z] Copying: 824/1024 [MB] (11 MBps) [2024-12-09T14:16:09.742Z] Copying: 835/1024 [MB] (11 MBps) [2024-12-09T14:16:10.687Z] Copying: 846/1024 [MB] (11 MBps) [2024-12-09T14:16:11.630Z] Copying: 858/1024 [MB] (11 MBps) [2024-12-09T14:16:12.574Z] Copying: 870/1024 [MB] (12 MBps) [2024-12-09T14:16:13.518Z] Copying: 882/1024 [MB] (11 MBps) [2024-12-09T14:16:14.462Z] Copying: 894/1024 [MB] (12 MBps) [2024-12-09T14:16:15.406Z] Copying: 907/1024 [MB] (13 MBps) [2024-12-09T14:16:16.355Z] Copying: 919/1024 [MB] (11 MBps) [2024-12-09T14:16:17.783Z] Copying: 930/1024 [MB] (11 MBps) [2024-12-09T14:16:18.357Z] Copying: 943/1024 [MB] (12 MBps) [2024-12-09T14:16:19.748Z] Copying: 954/1024 [MB] (11 MBps) [2024-12-09T14:16:20.692Z] Copying: 966/1024 [MB] (11 MBps) [2024-12-09T14:16:21.635Z] Copying: 978/1024 [MB] (11 MBps) [2024-12-09T14:16:22.579Z] Copying: 991/1024 [MB] (13 MBps) [2024-12-09T14:16:23.521Z] Copying: 1005/1024 [MB] (13 MBps) [2024-12-09T14:16:23.781Z] Copying: 1018/1024 [MB] (13 MBps) [2024-12-09T14:16:24.355Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-09 14:16:24.063088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.063320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:22.561 [2024-12-09 14:16:24.063340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:22.561 [2024-12-09 14:16:24.063349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.063378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:22.561 [2024-12-09 14:16:24.066007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.066043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:22.561 [2024-12-09 14:16:24.066053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:23:22.561 [2024-12-09 14:16:24.066061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.066279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.066288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:22.561 [2024-12-09 14:16:24.066297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:23:22.561 [2024-12-09 14:16:24.066305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.069825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.069908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:22.561 [2024-12-09 14:16:24.069957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.507 ms 00:23:22.561 [2024-12-09 14:16:24.069985] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.076765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.076861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:22.561 [2024-12-09 14:16:24.076910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.750 ms 00:23:22.561 [2024-12-09 14:16:24.076931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.101611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.101723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:22.561 [2024-12-09 14:16:24.101774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.613 ms 00:23:22.561 [2024-12-09 14:16:24.101797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.116800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.116915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:22.561 [2024-12-09 14:16:24.116968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.923 ms 00:23:22.561 [2024-12-09 14:16:24.116979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.117170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.117183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:22.561 [2024-12-09 14:16:24.117192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:22.561 [2024-12-09 14:16:24.117200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.142246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.142277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:22.561 [2024-12-09 14:16:24.142287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.031 ms 00:23:22.561 [2024-12-09 14:16:24.142295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.165982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.166097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:22.561 [2024-12-09 14:16:24.166112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.657 ms 00:23:22.561 [2024-12-09 14:16:24.166120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.198717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.198761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:22.561 [2024-12-09 14:16:24.198777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.568 ms 00:23:22.561 [2024-12-09 14:16:24.198789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.561 [2024-12-09 14:16:24.236203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.561 [2024-12-09 14:16:24.236249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:22.562 [2024-12-09 14:16:24.236267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 37.327 ms 00:23:22.562 [2024-12-09 14:16:24.236279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.562 [2024-12-09 14:16:24.236324] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:22.562 [2024-12-09 14:16:24.236349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 
14:16:24.236673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.236981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:23:22.562 [2024-12-09 14:16:24.236993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:22.562 [2024-12-09 14:16:24.237484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:22.563 [2024-12-09 14:16:24.237625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:22.563 [2024-12-09 14:16:24.237637] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05373d07-4b4e-457d-b347-de8cd136f1a9 00:23:22.563 [2024-12-09 14:16:24.237649] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:22.563 [2024-12-09 14:16:24.237661] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:22.563 [2024-12-09 14:16:24.237673] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:22.563 [2024-12-09 14:16:24.237686] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:22.563 [2024-12-09 14:16:24.237705] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:22.563 [2024-12-09 14:16:24.237718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:22.563 [2024-12-09 14:16:24.237731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:22.563 [2024-12-09 14:16:24.237743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:22.563 [2024-12-09 14:16:24.237755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:22.563 [2024-12-09 14:16:24.237768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.563 [2024-12-09 14:16:24.237782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:22.563 [2024-12-09 14:16:24.237796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:23:22.563 [2024-12-09 14:16:24.237812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.563 [2024-12-09 14:16:24.258181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.563 [2024-12-09 14:16:24.258222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:22.563 [2024-12-09 14:16:24.258236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.326 ms 00:23:22.563 [2024-12-09 14:16:24.258247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.563 [2024-12-09 14:16:24.258793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.563 [2024-12-09 14:16:24.258823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:22.563 [2024-12-09 14:16:24.258841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:23:22.563 [2024-12-09 14:16:24.258853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.563 [2024-12-09 14:16:24.312273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.563 [2024-12-09 14:16:24.312446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.563 [2024-12-09 14:16:24.312469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.563 [2024-12-09 14:16:24.312484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.563 [2024-12-09 14:16:24.312583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.563 [2024-12-09 14:16:24.312598] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.563 [2024-12-09 14:16:24.312617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.563 [2024-12-09 14:16:24.312628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.563 [2024-12-09 14:16:24.312724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.563 [2024-12-09 14:16:24.312739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.563 [2024-12-09 14:16:24.312752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.563 [2024-12-09 14:16:24.312764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.563 [2024-12-09 14:16:24.312786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.563 [2024-12-09 14:16:24.312797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.563 [2024-12-09 14:16:24.312810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.563 [2024-12-09 14:16:24.312826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.824 [2024-12-09 14:16:24.406205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.824 [2024-12-09 14:16:24.406344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.824 [2024-12-09 14:16:24.406360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.824 [2024-12-09 14:16:24.406369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.824 [2024-12-09 14:16:24.469088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.825 [2024-12-09 14:16:24.469219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.825 [2024-12-09 14:16:24.469237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.825 [2024-12-09 14:16:24.469245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.825 [2024-12-09 14:16:24.469293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.825 [2024-12-09 14:16:24.469302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.825 [2024-12-09 14:16:24.469310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.825 [2024-12-09 14:16:24.469317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.825 [2024-12-09 14:16:24.469365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.825 [2024-12-09 14:16:24.469374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.825 [2024-12-09 14:16:24.469381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.825 [2024-12-09 14:16:24.469389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.825 [2024-12-09 14:16:24.469474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.825 [2024-12-09 14:16:24.469483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.825 [2024-12-09 14:16:24.469492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.825 [2024-12-09 14:16:24.469500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.825 [2024-12-09 14:16:24.469529] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.825 [2024-12-09 14:16:24.469558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:22.825 [2024-12-09 14:16:24.469567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.825 [2024-12-09 14:16:24.469574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.825 [2024-12-09 14:16:24.469610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.825 [2024-12-09 14:16:24.469619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.825 [2024-12-09 14:16:24.469626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.825 [2024-12-09 14:16:24.469633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.825 [2024-12-09 14:16:24.469670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.825 [2024-12-09 14:16:24.469679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.825 [2024-12-09 14:16:24.469687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.825 [2024-12-09 14:16:24.469694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.825 [2024-12-09 14:16:24.469800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.689 ms, result 0 00:23:23.398 00:23:23.398 00:23:23.398 14:16:25 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:25.942 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:25.942 14:16:27 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:25.943 [2024-12-09 14:16:27.338587] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
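
For orientation, the ftl_restore leg traced above comes down to three harness commands, all of them visible verbatim in this log: restore.sh@74 reads the contents of the restored ftl0 bdev out to a scratch file with spdk_dd, restore.sh@76 checks that file against a previously recorded checksum with md5sum -c (the "testfile: OK" above is the pass condition for that step), and restore.sh@79 writes the file back into ftl0 at --seek=131072. Each spdk_dd invocation starts its own SPDK application, which is why a full FTL startup and shutdown trace brackets every copy; the SPDK banner just above, and the EAL parameters line that follows, belong to the @79 write-back run. A minimal recap of the sequence, with every binary path, flag and value copied from the trace itself (only the shell variables are shorthand added here):

  # The three ftl_restore commands shown in this log (restore.sh@74/@76/@79).
  # All paths and values are copied from the trace; the variables are shorthand.
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  TF=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  $DD --ib=ftl0 --of=$TF --json=$CFG --count=262144   # dump bdev contents to a file
  md5sum -c $TF.md5                                   # verify: prints "testfile: OK"
  $DD --if=$TF --ob=ftl0 --json=$CFG --seek=131072    # write the file back at an offset

The md5sum check is the actual restore test: it confirms that the data read back out of ftl0 after the earlier shutdown matches the recorded checksum.
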
00:23:25.943 [2024-12-09 14:16:27.338848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78694 ] 00:23:25.943 [2024-12-09 14:16:27.499300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.943 [2024-12-09 14:16:27.593091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.204 [2024-12-09 14:16:27.850890] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.204 [2024-12-09 14:16:27.850951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.467 [2024-12-09 14:16:28.006689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.467 [2024-12-09 14:16:28.006732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:26.467 [2024-12-09 14:16:28.006744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:26.467 [2024-12-09 14:16:28.006752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.467 [2024-12-09 14:16:28.006796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.467 [2024-12-09 14:16:28.006809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:26.467 [2024-12-09 14:16:28.006817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:26.467 [2024-12-09 14:16:28.006825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.467 [2024-12-09 14:16:28.006841] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:26.467 [2024-12-09 14:16:28.007486] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:26.467 [2024-12-09 14:16:28.007501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.467 [2024-12-09 14:16:28.007508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:26.467 [2024-12-09 14:16:28.007516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:23:26.467 [2024-12-09 14:16:28.007523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.467 [2024-12-09 14:16:28.008577] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:26.467 [2024-12-09 14:16:28.021110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.467 [2024-12-09 14:16:28.021154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:26.467 [2024-12-09 14:16:28.021167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.534 ms 00:23:26.467 [2024-12-09 14:16:28.021175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.467 [2024-12-09 14:16:28.021228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.467 [2024-12-09 14:16:28.021237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:26.467 [2024-12-09 14:16:28.021245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:26.467 [2024-12-09 14:16:28.021252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.467 [2024-12-09 14:16:28.026041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
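(Each management step in this log is a fixed group of trace_step entries from mngt/ftl_mngt.c: an Action or Rollback marker at source line 427, the step name at 428, its duration at 430, and its status at 431. A rough sketch of pulling per-step durations out of such output, assuming one entry per line as in the raw console; the helper and its names are illustrative, not SPDK code:)

import re

# Sketch: extract (name, duration_ms) pairs from FTL trace_step output.
NAME = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(lines):
    steps, pending = [], None
    for line in lines:
        m = NAME.search(line)
        if m:
            pending = m.group(1).strip()  # remember the step name at 428
            continue
        m = DUR.search(line)
        if m and pending is not None:     # pair it with the 430 duration
            steps.append((pending, float(m.group(1))))
            pending = None
    return steps

(Sorting the result by duration makes the slow steps obvious; in the startup that follows, 'Restore P2L checkpoints' and 'Initialize NV cache' dominate.)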
00:23:26.467 [2024-12-09 14:16:28.026067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:26.467 [2024-12-09 14:16:28.026076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.741 ms 00:23:26.467 [2024-12-09 14:16:28.026087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.467 [2024-12-09 14:16:28.026150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.467 [2024-12-09 14:16:28.026158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:26.467 [2024-12-09 14:16:28.026166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:26.467 [2024-12-09 14:16:28.026173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.467 [2024-12-09 14:16:28.026217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.467 [2024-12-09 14:16:28.026227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:26.467 [2024-12-09 14:16:28.026235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:26.467 [2024-12-09 14:16:28.026241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.467 [2024-12-09 14:16:28.026265] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:26.467 [2024-12-09 14:16:28.029587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.467 [2024-12-09 14:16:28.029618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:26.468 [2024-12-09 14:16:28.029635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:23:26.468 [2024-12-09 14:16:28.029641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.468 [2024-12-09 14:16:28.029670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.468 [2024-12-09 14:16:28.029679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:26.468 [2024-12-09 14:16:28.029686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:26.468 [2024-12-09 14:16:28.029693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.468 [2024-12-09 14:16:28.029711] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:26.468 [2024-12-09 14:16:28.029729] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:26.468 [2024-12-09 14:16:28.029762] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:26.468 [2024-12-09 14:16:28.029779] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:26.468 [2024-12-09 14:16:28.029881] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:26.468 [2024-12-09 14:16:28.029891] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:26.468 [2024-12-09 14:16:28.029901] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:26.468 [2024-12-09 14:16:28.029911] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:26.468 [2024-12-09 14:16:28.029919] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:26.468 [2024-12-09 14:16:28.029927] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:26.468 [2024-12-09 14:16:28.029934] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:26.468 [2024-12-09 14:16:28.029943] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:26.468 [2024-12-09 14:16:28.029950] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:26.468 [2024-12-09 14:16:28.029957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.468 [2024-12-09 14:16:28.029964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:26.468 [2024-12-09 14:16:28.029971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:23:26.468 [2024-12-09 14:16:28.029978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.468 [2024-12-09 14:16:28.030059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.468 [2024-12-09 14:16:28.030067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:26.468 [2024-12-09 14:16:28.030074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:26.468 [2024-12-09 14:16:28.030081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.468 [2024-12-09 14:16:28.030183] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:26.468 [2024-12-09 14:16:28.030192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:26.468 [2024-12-09 14:16:28.030200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:26.468 [2024-12-09 14:16:28.030221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:26.468 [2024-12-09 14:16:28.030242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.468 [2024-12-09 14:16:28.030255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:26.468 [2024-12-09 14:16:28.030261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:26.468 [2024-12-09 14:16:28.030269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.468 [2024-12-09 14:16:28.030280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:26.468 [2024-12-09 14:16:28.030287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:26.468 [2024-12-09 14:16:28.030293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:26.468 [2024-12-09 14:16:28.030306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030312] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:26.468 [2024-12-09 14:16:28.030326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:26.468 [2024-12-09 14:16:28.030344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:26.468 [2024-12-09 14:16:28.030363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:26.468 [2024-12-09 14:16:28.030382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:26.468 [2024-12-09 14:16:28.030401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.468 [2024-12-09 14:16:28.030413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:26.468 [2024-12-09 14:16:28.030419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:26.468 [2024-12-09 14:16:28.030425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.468 [2024-12-09 14:16:28.030432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:26.468 [2024-12-09 14:16:28.030438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:26.468 [2024-12-09 14:16:28.030444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:26.468 [2024-12-09 14:16:28.030457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:26.468 [2024-12-09 14:16:28.030463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030469] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:26.468 [2024-12-09 14:16:28.030478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:26.468 [2024-12-09 14:16:28.030485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.468 [2024-12-09 14:16:28.030499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:26.468 [2024-12-09 14:16:28.030506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:26.468 [2024-12-09 14:16:28.030512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:26.468 
[2024-12-09 14:16:28.030518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:26.468 [2024-12-09 14:16:28.030524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:26.468 [2024-12-09 14:16:28.030531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:26.468 [2024-12-09 14:16:28.030733] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:26.468 [2024-12-09 14:16:28.030779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.468 [2024-12-09 14:16:28.030813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:26.468 [2024-12-09 14:16:28.030842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:26.468 [2024-12-09 14:16:28.030869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:26.468 [2024-12-09 14:16:28.030897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:26.468 [2024-12-09 14:16:28.030924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:26.468 [2024-12-09 14:16:28.030952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:26.468 [2024-12-09 14:16:28.030979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:26.468 [2024-12-09 14:16:28.031008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:26.468 [2024-12-09 14:16:28.031550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:26.468 [2024-12-09 14:16:28.031592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:26.468 [2024-12-09 14:16:28.031622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:26.468 [2024-12-09 14:16:28.031651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:26.468 [2024-12-09 14:16:28.031679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:26.468 [2024-12-09 14:16:28.031707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:26.468 [2024-12-09 14:16:28.031735] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:26.468 [2024-12-09 14:16:28.031757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.468 [2024-12-09 14:16:28.031765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:26.469 [2024-12-09 14:16:28.031772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:26.469 [2024-12-09 14:16:28.031780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:26.469 [2024-12-09 14:16:28.031787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:26.469 [2024-12-09 14:16:28.031796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.031806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:26.469 [2024-12-09 14:16:28.031815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:23:26.469 [2024-12-09 14:16:28.031823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.057406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.057439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:26.469 [2024-12-09 14:16:28.057450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.508 ms 00:23:26.469 [2024-12-09 14:16:28.057460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.057556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.057564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:26.469 [2024-12-09 14:16:28.057573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:26.469 [2024-12-09 14:16:28.057580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.099675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.099709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.469 [2024-12-09 14:16:28.099721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.043 ms 00:23:26.469 [2024-12-09 14:16:28.099729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.099765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.099774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:26.469 [2024-12-09 14:16:28.099785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:26.469 [2024-12-09 14:16:28.099792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.100145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.100159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.469 [2024-12-09 14:16:28.100167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:23:26.469 [2024-12-09 14:16:28.100175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.100294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.100303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.469 [2024-12-09 14:16:28.100315] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:26.469 [2024-12-09 14:16:28.100323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.113217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.113246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.469 [2024-12-09 14:16:28.113255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.875 ms 00:23:26.469 [2024-12-09 14:16:28.113262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.125717] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:26.469 [2024-12-09 14:16:28.125747] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:26.469 [2024-12-09 14:16:28.125758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.125765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:26.469 [2024-12-09 14:16:28.125774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.411 ms 00:23:26.469 [2024-12-09 14:16:28.125782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.149491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.149519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:26.469 [2024-12-09 14:16:28.149530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.674 ms 00:23:26.469 [2024-12-09 14:16:28.149552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.161230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.161257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:26.469 [2024-12-09 14:16:28.161267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.647 ms 00:23:26.469 [2024-12-09 14:16:28.161273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.172570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.172594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:26.469 [2024-12-09 14:16:28.172603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.266 ms 00:23:26.469 [2024-12-09 14:16:28.172610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.173204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.173247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:26.469 [2024-12-09 14:16:28.173258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:23:26.469 [2024-12-09 14:16:28.173266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.227124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.227161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:26.469 [2024-12-09 14:16:28.227177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.842 ms 00:23:26.469 [2024-12-09 14:16:28.227185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.237243] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:26.469 [2024-12-09 14:16:28.239348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.239372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:26.469 [2024-12-09 14:16:28.239383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.124 ms 00:23:26.469 [2024-12-09 14:16:28.239392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.239473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.239485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:26.469 [2024-12-09 14:16:28.239497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:26.469 [2024-12-09 14:16:28.239505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.239580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.239591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:26.469 [2024-12-09 14:16:28.239601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:26.469 [2024-12-09 14:16:28.239609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.239629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.239638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:26.469 [2024-12-09 14:16:28.239647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:26.469 [2024-12-09 14:16:28.239655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.469 [2024-12-09 14:16:28.239688] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:26.469 [2024-12-09 14:16:28.239698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.469 [2024-12-09 14:16:28.239706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:26.469 [2024-12-09 14:16:28.239714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:26.469 [2024-12-09 14:16:28.239721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.731 [2024-12-09 14:16:28.262938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.731 [2024-12-09 14:16:28.262964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:26.731 [2024-12-09 14:16:28.262977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.200 ms 00:23:26.731 [2024-12-09 14:16:28.262985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.731 [2024-12-09 14:16:28.263049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.731 [2024-12-09 14:16:28.263058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:26.731 [2024-12-09 14:16:28.263066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:26.731 [2024-12-09 14:16:28.263073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
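(For scale, the finish_msg line just below reports this 'FTL startup' at 257.151 ms total, and the double-digit steps traced above account for nearly all of it: 53.842 (Restore P2L checkpoints) + 42.043 (Initialize NV cache) + 25.508 (Initialize metadata) + 23.674 (Restore valid map metadata) + 23.200 (Set FTL dirty state) + 12.875 (Initialize reloc) + 12.534 (Load super block) + 12.411 (Restore NV cache metadata) + 12.124 (Initialize L2P) + 11.647 (Restore band info metadata) + 11.266 (Restore trim metadata) = 241.124 ms, leaving roughly 16 ms for the remaining sub-millisecond steps.)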
00:23:26.731 [2024-12-09 14:16:28.264237] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.151 ms, result 0 00:23:27.675  [2024-12-09T14:16:30.415Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-09T14:16:31.359Z] Copying: 30/1024 [MB] (18 MBps) [2024-12-09T14:16:32.304Z] Copying: 43/1024 [MB] (12 MBps) [2024-12-09T14:16:33.693Z] Copying: 56/1024 [MB] (13 MBps) [2024-12-09T14:16:34.637Z] Copying: 68/1024 [MB] (11 MBps) [2024-12-09T14:16:35.580Z] Copying: 83/1024 [MB] (15 MBps) [2024-12-09T14:16:36.518Z] Copying: 95/1024 [MB] (11 MBps) [2024-12-09T14:16:37.462Z] Copying: 133/1024 [MB] (38 MBps) [2024-12-09T14:16:38.399Z] Copying: 151/1024 [MB] (17 MBps) [2024-12-09T14:16:39.339Z] Copying: 185/1024 [MB] (33 MBps) [2024-12-09T14:16:40.283Z] Copying: 204/1024 [MB] (19 MBps) [2024-12-09T14:16:41.737Z] Copying: 226/1024 [MB] (22 MBps) [2024-12-09T14:16:42.331Z] Copying: 267/1024 [MB] (40 MBps) [2024-12-09T14:16:43.713Z] Copying: 287/1024 [MB] (19 MBps) [2024-12-09T14:16:44.286Z] Copying: 310/1024 [MB] (23 MBps) [2024-12-09T14:16:45.669Z] Copying: 330/1024 [MB] (19 MBps) [2024-12-09T14:16:46.613Z] Copying: 347/1024 [MB] (16 MBps) [2024-12-09T14:16:47.557Z] Copying: 368/1024 [MB] (21 MBps) [2024-12-09T14:16:48.499Z] Copying: 382/1024 [MB] (13 MBps) [2024-12-09T14:16:49.438Z] Copying: 404/1024 [MB] (22 MBps) [2024-12-09T14:16:50.381Z] Copying: 430/1024 [MB] (25 MBps) [2024-12-09T14:16:51.325Z] Copying: 452/1024 [MB] (22 MBps) [2024-12-09T14:16:52.707Z] Copying: 470/1024 [MB] (18 MBps) [2024-12-09T14:16:53.279Z] Copying: 491/1024 [MB] (20 MBps) [2024-12-09T14:16:54.664Z] Copying: 504/1024 [MB] (13 MBps) [2024-12-09T14:16:55.609Z] Copying: 522/1024 [MB] (18 MBps) [2024-12-09T14:16:56.626Z] Copying: 535/1024 [MB] (12 MBps) [2024-12-09T14:16:57.594Z] Copying: 549/1024 [MB] (13 MBps) [2024-12-09T14:16:58.538Z] Copying: 562/1024 [MB] (13 MBps) [2024-12-09T14:16:59.483Z] Copying: 575/1024 [MB] (12 MBps) [2024-12-09T14:17:00.428Z] Copying: 599192/1048576 [kB] (10212 kBps) [2024-12-09T14:17:01.373Z] Copying: 595/1024 [MB] (10 MBps) [2024-12-09T14:17:02.318Z] Copying: 620076/1048576 [kB] (10220 kBps) [2024-12-09T14:17:03.704Z] Copying: 615/1024 [MB] (10 MBps) [2024-12-09T14:17:04.648Z] Copying: 628/1024 [MB] (12 MBps) [2024-12-09T14:17:05.591Z] Copying: 640/1024 [MB] (12 MBps) [2024-12-09T14:17:06.541Z] Copying: 650/1024 [MB] (10 MBps) [2024-12-09T14:17:07.483Z] Copying: 660/1024 [MB] (10 MBps) [2024-12-09T14:17:08.429Z] Copying: 671/1024 [MB] (10 MBps) [2024-12-09T14:17:09.373Z] Copying: 681/1024 [MB] (10 MBps) [2024-12-09T14:17:10.317Z] Copying: 707700/1048576 [kB] (10192 kBps) [2024-12-09T14:17:11.702Z] Copying: 701/1024 [MB] (10 MBps) [2024-12-09T14:17:12.646Z] Copying: 711/1024 [MB] (10 MBps) [2024-12-09T14:17:13.590Z] Copying: 722/1024 [MB] (11 MBps) [2024-12-09T14:17:14.534Z] Copying: 733/1024 [MB] (10 MBps) [2024-12-09T14:17:15.478Z] Copying: 744/1024 [MB] (10 MBps) [2024-12-09T14:17:16.414Z] Copying: 772016/1048576 [kB] (9764 kBps) [2024-12-09T14:17:17.357Z] Copying: 777/1024 [MB] (23 MBps) [2024-12-09T14:17:18.304Z] Copying: 787/1024 [MB] (10 MBps) [2024-12-09T14:17:19.693Z] Copying: 798/1024 [MB] (10 MBps) [2024-12-09T14:17:20.639Z] Copying: 808/1024 [MB] (10 MBps) [2024-12-09T14:17:21.585Z] Copying: 818/1024 [MB] (10 MBps) [2024-12-09T14:17:22.532Z] Copying: 829/1024 [MB] (10 MBps) [2024-12-09T14:17:23.488Z] Copying: 839/1024 [MB] (10 MBps) [2024-12-09T14:17:24.430Z] Copying: 849/1024 [MB] (10 MBps) [2024-12-09T14:17:25.371Z] 
Copying: 859/1024 [MB] (10 MBps) [2024-12-09T14:17:26.313Z] Copying: 870/1024 [MB] (10 MBps) [2024-12-09T14:17:27.699Z] Copying: 880/1024 [MB] (10 MBps) [2024-12-09T14:17:28.643Z] Copying: 891/1024 [MB] (11 MBps) [2024-12-09T14:17:29.588Z] Copying: 903/1024 [MB] (11 MBps) [2024-12-09T14:17:30.531Z] Copying: 914/1024 [MB] (11 MBps) [2024-12-09T14:17:31.476Z] Copying: 925/1024 [MB] (11 MBps) [2024-12-09T14:17:32.412Z] Copying: 937/1024 [MB] (11 MBps) [2024-12-09T14:17:33.349Z] Copying: 979/1024 [MB] (42 MBps) [2024-12-09T14:17:33.917Z] Copying: 1008/1024 [MB] (29 MBps) [2024-12-09T14:17:33.917Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-09 14:17:33.617163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.123 [2024-12-09 14:17:33.617209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:32.123 [2024-12-09 14:17:33.617222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:32.123 [2024-12-09 14:17:33.617231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.123 [2024-12-09 14:17:33.617250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:32.123 [2024-12-09 14:17:33.619839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.123 [2024-12-09 14:17:33.619875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:32.123 [2024-12-09 14:17:33.619885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:24:32.123 [2024-12-09 14:17:33.619893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.123 [2024-12-09 14:17:33.621241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.123 [2024-12-09 14:17:33.621271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:32.123 [2024-12-09 14:17:33.621281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:24:32.123 [2024-12-09 14:17:33.621288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.123 [2024-12-09 14:17:33.633479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.123 [2024-12-09 14:17:33.633511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:32.123 [2024-12-09 14:17:33.633521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.177 ms 00:24:32.123 [2024-12-09 14:17:33.633533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.123 [2024-12-09 14:17:33.639683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.123 [2024-12-09 14:17:33.639713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:32.123 [2024-12-09 14:17:33.639723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.082 ms 00:24:32.123 [2024-12-09 14:17:33.639731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.123 [2024-12-09 14:17:33.662772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.123 [2024-12-09 14:17:33.662804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:32.123 [2024-12-09 14:17:33.662814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.980 ms 00:24:32.123 [2024-12-09 14:17:33.662821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.124 [2024-12-09 14:17:33.676374] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:32.124 [2024-12-09 14:17:33.676404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:32.124 [2024-12-09 14:17:33.676415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.523 ms 00:24:32.124 [2024-12-09 14:17:33.676424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.124 [2024-12-09 14:17:33.676550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.124 [2024-12-09 14:17:33.676560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:32.124 [2024-12-09 14:17:33.676569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:24:32.124 [2024-12-09 14:17:33.676576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.124 [2024-12-09 14:17:33.699355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.124 [2024-12-09 14:17:33.699383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:32.124 [2024-12-09 14:17:33.699393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.766 ms 00:24:32.124 [2024-12-09 14:17:33.699400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.124 [2024-12-09 14:17:33.721891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.124 [2024-12-09 14:17:33.721920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:32.124 [2024-12-09 14:17:33.721929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.462 ms 00:24:32.124 [2024-12-09 14:17:33.721937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.124 [2024-12-09 14:17:33.743882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.124 [2024-12-09 14:17:33.743910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:32.124 [2024-12-09 14:17:33.743919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.916 ms 00:24:32.124 [2024-12-09 14:17:33.743926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.124 [2024-12-09 14:17:33.765968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.124 [2024-12-09 14:17:33.765996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:32.124 [2024-12-09 14:17:33.766005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.993 ms 00:24:32.124 [2024-12-09 14:17:33.766012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.124 [2024-12-09 14:17:33.766041] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:32.124 [2024-12-09 14:17:33.766058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766102] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766284] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 
14:17:33.766462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:32.124 [2024-12-09 14:17:33.766590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:24:32.125 [2024-12-09 14:17:33.766655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:32.125 [2024-12-09 14:17:33.766808] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:32.125 [2024-12-09 14:17:33.766815] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05373d07-4b4e-457d-b347-de8cd136f1a9 00:24:32.125 [2024-12-09 14:17:33.766822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:32.125 [2024-12-09 14:17:33.766829] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:32.125 [2024-12-09 14:17:33.766836] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:32.125 [2024-12-09 14:17:33.766843] ftl_debug.c: 216:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] WAF: inf 00:24:32.125 [2024-12-09 14:17:33.766855] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:32.125 [2024-12-09 14:17:33.766863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:32.125 [2024-12-09 14:17:33.766870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:32.125 [2024-12-09 14:17:33.766876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:32.125 [2024-12-09 14:17:33.766882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:32.125 [2024-12-09 14:17:33.766889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.125 [2024-12-09 14:17:33.766896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:32.125 [2024-12-09 14:17:33.766904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:24:32.125 [2024-12-09 14:17:33.766913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.125 [2024-12-09 14:17:33.778802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.125 [2024-12-09 14:17:33.778830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:32.125 [2024-12-09 14:17:33.778840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.875 ms 00:24:32.125 [2024-12-09 14:17:33.778848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.125 [2024-12-09 14:17:33.779179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.125 [2024-12-09 14:17:33.779194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:32.125 [2024-12-09 14:17:33.779207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:24:32.125 [2024-12-09 14:17:33.779214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.125 [2024-12-09 14:17:33.811532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.125 [2024-12-09 14:17:33.811569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:32.125 [2024-12-09 14:17:33.811578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.125 [2024-12-09 14:17:33.811585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.125 [2024-12-09 14:17:33.811633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.125 [2024-12-09 14:17:33.811641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:32.125 [2024-12-09 14:17:33.811652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.125 [2024-12-09 14:17:33.811659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.125 [2024-12-09 14:17:33.811709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.125 [2024-12-09 14:17:33.811718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:32.125 [2024-12-09 14:17:33.811726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.125 [2024-12-09 14:17:33.811734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.125 [2024-12-09 14:17:33.811747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.125 [2024-12-09 14:17:33.811755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:32.125 [2024-12-09 
14:17:33.811762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.125 [2024-12-09 14:17:33.811771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.125 [2024-12-09 14:17:33.886375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.125 [2024-12-09 14:17:33.886416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:32.125 [2024-12-09 14:17:33.886426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.125 [2024-12-09 14:17:33.886434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.384 [2024-12-09 14:17:33.948050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.384 [2024-12-09 14:17:33.948089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:32.384 [2024-12-09 14:17:33.948104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.384 [2024-12-09 14:17:33.948112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.384 [2024-12-09 14:17:33.948177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.384 [2024-12-09 14:17:33.948186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:32.384 [2024-12-09 14:17:33.948194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.384 [2024-12-09 14:17:33.948201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.384 [2024-12-09 14:17:33.948233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.384 [2024-12-09 14:17:33.948241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:32.384 [2024-12-09 14:17:33.948248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.384 [2024-12-09 14:17:33.948255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.384 [2024-12-09 14:17:33.948436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.384 [2024-12-09 14:17:33.948445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:32.384 [2024-12-09 14:17:33.948453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.384 [2024-12-09 14:17:33.948460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.384 [2024-12-09 14:17:33.948488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.384 [2024-12-09 14:17:33.948497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:32.384 [2024-12-09 14:17:33.948504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.384 [2024-12-09 14:17:33.948511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.384 [2024-12-09 14:17:33.948564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.384 [2024-12-09 14:17:33.948573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:32.384 [2024-12-09 14:17:33.948581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.384 [2024-12-09 14:17:33.948588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.384 [2024-12-09 14:17:33.948625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.384 [2024-12-09 14:17:33.948634] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:32.384 [2024-12-09 14:17:33.948642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.384 [2024-12-09 14:17:33.948649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.384 [2024-12-09 14:17:33.948756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.566 ms, result 0 00:24:33.333 00:24:33.333 00:24:33.333 14:17:35 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:33.590 [2024-12-09 14:17:35.177998] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:24:33.591 [2024-12-09 14:17:35.178117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79383 ] 00:24:33.591 [2024-12-09 14:17:35.338911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.848 [2024-12-09 14:17:35.433826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.107 [2024-12-09 14:17:35.686647] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:34.107 [2024-12-09 14:17:35.686711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:34.107 [2024-12-09 14:17:35.839996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.840048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:34.107 [2024-12-09 14:17:35.840061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:34.107 [2024-12-09 14:17:35.840069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.840111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.840123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:34.107 [2024-12-09 14:17:35.840131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:34.107 [2024-12-09 14:17:35.840139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.840157] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:34.107 [2024-12-09 14:17:35.840807] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:34.107 [2024-12-09 14:17:35.840835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.840843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:34.107 [2024-12-09 14:17:35.840851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:24:34.107 [2024-12-09 14:17:35.840858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.841884] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:34.107 [2024-12-09 14:17:35.853758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.853791] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:34.107 [2024-12-09 14:17:35.853802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.875 ms 00:24:34.107 [2024-12-09 14:17:35.853810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.853864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.853873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:34.107 [2024-12-09 14:17:35.853882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:34.107 [2024-12-09 14:17:35.853890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.858492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.858522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:34.107 [2024-12-09 14:17:35.858531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:24:34.107 [2024-12-09 14:17:35.858553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.858623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.858633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:34.107 [2024-12-09 14:17:35.858641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:34.107 [2024-12-09 14:17:35.858648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.858685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.858694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:34.107 [2024-12-09 14:17:35.858701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:34.107 [2024-12-09 14:17:35.858709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.858731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:34.107 [2024-12-09 14:17:35.862058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.862085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:34.107 [2024-12-09 14:17:35.862096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:24:34.107 [2024-12-09 14:17:35.862104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.862133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.107 [2024-12-09 14:17:35.862142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:34.107 [2024-12-09 14:17:35.862150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:34.107 [2024-12-09 14:17:35.862157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.107 [2024-12-09 14:17:35.862175] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:34.107 [2024-12-09 14:17:35.862194] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:34.107 [2024-12-09 14:17:35.862227] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] base layout blob load 0x48 bytes 00:24:34.107 [2024-12-09 14:17:35.862244] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:34.108 [2024-12-09 14:17:35.862345] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:34.108 [2024-12-09 14:17:35.862363] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:34.108 [2024-12-09 14:17:35.862373] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:34.108 [2024-12-09 14:17:35.862383] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862392] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862400] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:34.108 [2024-12-09 14:17:35.862406] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:34.108 [2024-12-09 14:17:35.862416] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:34.108 [2024-12-09 14:17:35.862423] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:34.108 [2024-12-09 14:17:35.862430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.108 [2024-12-09 14:17:35.862437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:34.108 [2024-12-09 14:17:35.862444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:24:34.108 [2024-12-09 14:17:35.862451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.108 [2024-12-09 14:17:35.862533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.108 [2024-12-09 14:17:35.862553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:34.108 [2024-12-09 14:17:35.862561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:34.108 [2024-12-09 14:17:35.862567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.108 [2024-12-09 14:17:35.862681] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:34.108 [2024-12-09 14:17:35.862698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:34.108 [2024-12-09 14:17:35.862706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:34.108 [2024-12-09 14:17:35.862729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:34.108 [2024-12-09 14:17:35.862749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.108 [2024-12-09 14:17:35.862762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 
00:24:34.108 [2024-12-09 14:17:35.862768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:34.108 [2024-12-09 14:17:35.862775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.108 [2024-12-09 14:17:35.862788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:34.108 [2024-12-09 14:17:35.862794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:34.108 [2024-12-09 14:17:35.862800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:34.108 [2024-12-09 14:17:35.862814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:34.108 [2024-12-09 14:17:35.862833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:34.108 [2024-12-09 14:17:35.862852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:34.108 [2024-12-09 14:17:35.862870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:34.108 [2024-12-09 14:17:35.862889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:34.108 [2024-12-09 14:17:35.862908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.108 [2024-12-09 14:17:35.862921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:34.108 [2024-12-09 14:17:35.862927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:34.108 [2024-12-09 14:17:35.862933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.108 [2024-12-09 14:17:35.862939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:34.108 [2024-12-09 14:17:35.862945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:34.108 [2024-12-09 14:17:35.862951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:34.108 [2024-12-09 14:17:35.862963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:34.108 [2024-12-09 14:17:35.862969] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.108 [2024-12-09 14:17:35.862975] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:34.108 [2024-12-09 14:17:35.862983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:34.108 [2024-12-09 14:17:35.862989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.108 [2024-12-09 14:17:35.862995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.108 [2024-12-09 14:17:35.863002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:34.108 [2024-12-09 14:17:35.863009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:34.108 [2024-12-09 14:17:35.863017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:34.108 [2024-12-09 14:17:35.863023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:34.108 [2024-12-09 14:17:35.863030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:34.108 [2024-12-09 14:17:35.863036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:34.108 [2024-12-09 14:17:35.863044] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:34.108 [2024-12-09 14:17:35.863052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.108 [2024-12-09 14:17:35.863062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:34.108 [2024-12-09 14:17:35.863069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:34.108 [2024-12-09 14:17:35.863076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:34.108 [2024-12-09 14:17:35.863083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:34.108 [2024-12-09 14:17:35.863090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:34.108 [2024-12-09 14:17:35.863097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:34.108 [2024-12-09 14:17:35.863103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:34.108 [2024-12-09 14:17:35.863110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:34.108 [2024-12-09 14:17:35.863117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:34.108 [2024-12-09 14:17:35.863124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:34.108 [2024-12-09 14:17:35.863131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:34.108 [2024-12-09 14:17:35.863138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 
blk_offs:0x71e0 blk_sz:0x20 00:24:34.108 [2024-12-09 14:17:35.863145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:34.108 [2024-12-09 14:17:35.863152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:34.108 [2024-12-09 14:17:35.863159] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:34.108 [2024-12-09 14:17:35.863167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.108 [2024-12-09 14:17:35.863174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:34.108 [2024-12-09 14:17:35.863181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:34.108 [2024-12-09 14:17:35.863188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:34.108 [2024-12-09 14:17:35.863196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:34.108 [2024-12-09 14:17:35.863203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.108 [2024-12-09 14:17:35.863210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:34.108 [2024-12-09 14:17:35.863217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:24:34.108 [2024-12-09 14:17:35.863224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.108 [2024-12-09 14:17:35.888367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.109 [2024-12-09 14:17:35.888403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:34.109 [2024-12-09 14:17:35.888413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.100 ms 00:24:34.109 [2024-12-09 14:17:35.888424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.109 [2024-12-09 14:17:35.888507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.109 [2024-12-09 14:17:35.888515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:34.109 [2024-12-09 14:17:35.888523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:34.109 [2024-12-09 14:17:35.888530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.367 [2024-12-09 14:17:35.927250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.367 [2024-12-09 14:17:35.927300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:34.367 [2024-12-09 14:17:35.927313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.657 ms 00:24:34.367 [2024-12-09 14:17:35.927321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.367 [2024-12-09 14:17:35.927373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.367 [2024-12-09 14:17:35.927383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:34.367 [2024-12-09 14:17:35.927395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.003 ms 00:24:34.367 [2024-12-09 14:17:35.927403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.367 [2024-12-09 14:17:35.927781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:35.927807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:34.368 [2024-12-09 14:17:35.927817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:24:34.368 [2024-12-09 14:17:35.927824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:35.927949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:35.927964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:34.368 [2024-12-09 14:17:35.927973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:34.368 [2024-12-09 14:17:35.927985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:35.940771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:35.940803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:34.368 [2024-12-09 14:17:35.940815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.766 ms 00:24:34.368 [2024-12-09 14:17:35.940822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:35.953164] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:24:34.368 [2024-12-09 14:17:35.953199] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:34.368 [2024-12-09 14:17:35.953211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:35.953219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:34.368 [2024-12-09 14:17:35.953228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.276 ms 00:24:34.368 [2024-12-09 14:17:35.953235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:35.977332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:35.977365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:34.368 [2024-12-09 14:17:35.977376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.058 ms 00:24:34.368 [2024-12-09 14:17:35.977384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:35.988912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:35.988944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:34.368 [2024-12-09 14:17:35.988953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.492 ms 00:24:34.368 [2024-12-09 14:17:35.988960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.000091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.000122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:34.368 [2024-12-09 14:17:36.000131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.101 ms 00:24:34.368 [2024-12-09 14:17:36.000139] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.000740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.000765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:34.368 [2024-12-09 14:17:36.000776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:24:34.368 [2024-12-09 14:17:36.000783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.054709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.054757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:34.368 [2024-12-09 14:17:36.054775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.908 ms 00:24:34.368 [2024-12-09 14:17:36.054784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.064911] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:34.368 [2024-12-09 14:17:36.067067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.067097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:34.368 [2024-12-09 14:17:36.067108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.232 ms 00:24:34.368 [2024-12-09 14:17:36.067116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.067203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.067214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:34.368 [2024-12-09 14:17:36.067227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:34.368 [2024-12-09 14:17:36.067235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.067302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.067313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:34.368 [2024-12-09 14:17:36.067321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:34.368 [2024-12-09 14:17:36.067330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.067349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.067358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:34.368 [2024-12-09 14:17:36.067366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:34.368 [2024-12-09 14:17:36.067372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.067404] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:34.368 [2024-12-09 14:17:36.067415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.067422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:34.368 [2024-12-09 14:17:36.067430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:34.368 [2024-12-09 14:17:36.067437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.090369] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.090412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:34.368 [2024-12-09 14:17:36.090427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.915 ms 00:24:34.368 [2024-12-09 14:17:36.090435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.090498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.368 [2024-12-09 14:17:36.090507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:34.368 [2024-12-09 14:17:36.090515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:34.368 [2024-12-09 14:17:36.090523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.368 [2024-12-09 14:17:36.091506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 251.121 ms, result 0 00:24:35.743  [2024-12-09T14:17:38.491Z] Copying: 1072/1048576 [kB] (1072 kBps) [2024-12-09T14:17:39.430Z] Copying: 20/1024 [MB] (19 MBps) [2024-12-09T14:17:40.370Z] Copying: 39/1024 [MB] (18 MBps) [2024-12-09T14:17:41.310Z] Copying: 55/1024 [MB] (15 MBps) [2024-12-09T14:17:42.691Z] Copying: 78/1024 [MB] (23 MBps) [2024-12-09T14:17:43.630Z] Copying: 94/1024 [MB] (16 MBps) [2024-12-09T14:17:44.570Z] Copying: 114/1024 [MB] (19 MBps) [2024-12-09T14:17:45.511Z] Copying: 134/1024 [MB] (19 MBps) [2024-12-09T14:17:46.453Z] Copying: 152/1024 [MB] (18 MBps) [2024-12-09T14:17:47.393Z] Copying: 170/1024 [MB] (18 MBps) [2024-12-09T14:17:48.335Z] Copying: 194/1024 [MB] (23 MBps) [2024-12-09T14:17:49.303Z] Copying: 216/1024 [MB] (22 MBps) [2024-12-09T14:17:50.687Z] Copying: 236/1024 [MB] (19 MBps) [2024-12-09T14:17:51.625Z] Copying: 261/1024 [MB] (25 MBps) [2024-12-09T14:17:52.566Z] Copying: 287/1024 [MB] (25 MBps) [2024-12-09T14:17:53.510Z] Copying: 307/1024 [MB] (20 MBps) [2024-12-09T14:17:54.452Z] Copying: 319/1024 [MB] (11 MBps) [2024-12-09T14:17:55.392Z] Copying: 335/1024 [MB] (16 MBps) [2024-12-09T14:17:56.335Z] Copying: 357/1024 [MB] (21 MBps) [2024-12-09T14:17:57.279Z] Copying: 368/1024 [MB] (11 MBps) [2024-12-09T14:17:58.666Z] Copying: 378/1024 [MB] (10 MBps) [2024-12-09T14:17:59.609Z] Copying: 389/1024 [MB] (10 MBps) [2024-12-09T14:18:00.553Z] Copying: 399/1024 [MB] (10 MBps) [2024-12-09T14:18:01.573Z] Copying: 413/1024 [MB] (13 MBps) [2024-12-09T14:18:02.513Z] Copying: 432/1024 [MB] (19 MBps) [2024-12-09T14:18:03.461Z] Copying: 445/1024 [MB] (12 MBps) [2024-12-09T14:18:04.403Z] Copying: 458/1024 [MB] (13 MBps) [2024-12-09T14:18:05.341Z] Copying: 475/1024 [MB] (16 MBps) [2024-12-09T14:18:06.283Z] Copying: 499/1024 [MB] (24 MBps) [2024-12-09T14:18:07.665Z] Copying: 521/1024 [MB] (21 MBps) [2024-12-09T14:18:08.611Z] Copying: 542/1024 [MB] (20 MBps) [2024-12-09T14:18:09.556Z] Copying: 559/1024 [MB] (16 MBps) [2024-12-09T14:18:10.501Z] Copying: 573/1024 [MB] (14 MBps) [2024-12-09T14:18:11.444Z] Copying: 583/1024 [MB] (10 MBps) [2024-12-09T14:18:12.388Z] Copying: 593/1024 [MB] (10 MBps) [2024-12-09T14:18:13.329Z] Copying: 611/1024 [MB] (17 MBps) [2024-12-09T14:18:14.714Z] Copying: 625/1024 [MB] (13 MBps) [2024-12-09T14:18:15.300Z] Copying: 642/1024 [MB] (17 MBps) [2024-12-09T14:18:16.687Z] Copying: 659/1024 [MB] (17 MBps) [2024-12-09T14:18:17.632Z] Copying: 674/1024 [MB] (14 MBps) [2024-12-09T14:18:18.573Z] Copying: 691/1024 [MB] (17 MBps) [2024-12-09T14:18:19.517Z] Copying: 703/1024 [MB] (12 
MBps) [2024-12-09T14:18:20.462Z] Copying: 719/1024 [MB] (15 MBps) [2024-12-09T14:18:21.406Z] Copying: 733/1024 [MB] (14 MBps) [2024-12-09T14:18:22.349Z] Copying: 746/1024 [MB] (12 MBps) [2024-12-09T14:18:23.293Z] Copying: 764/1024 [MB] (18 MBps) [2024-12-09T14:18:24.703Z] Copying: 778/1024 [MB] (14 MBps) [2024-12-09T14:18:25.280Z] Copying: 795/1024 [MB] (17 MBps) [2024-12-09T14:18:26.666Z] Copying: 813/1024 [MB] (17 MBps) [2024-12-09T14:18:27.610Z] Copying: 827/1024 [MB] (13 MBps) [2024-12-09T14:18:28.553Z] Copying: 838/1024 [MB] (11 MBps) [2024-12-09T14:18:29.497Z] Copying: 849/1024 [MB] (10 MBps) [2024-12-09T14:18:30.441Z] Copying: 860/1024 [MB] (11 MBps) [2024-12-09T14:18:31.383Z] Copying: 870/1024 [MB] (10 MBps) [2024-12-09T14:18:32.326Z] Copying: 881/1024 [MB] (10 MBps) [2024-12-09T14:18:33.715Z] Copying: 892/1024 [MB] (10 MBps) [2024-12-09T14:18:34.289Z] Copying: 903/1024 [MB] (10 MBps) [2024-12-09T14:18:35.676Z] Copying: 916/1024 [MB] (13 MBps) [2024-12-09T14:18:36.620Z] Copying: 928/1024 [MB] (11 MBps) [2024-12-09T14:18:37.567Z] Copying: 940/1024 [MB] (11 MBps) [2024-12-09T14:18:38.511Z] Copying: 952/1024 [MB] (12 MBps) [2024-12-09T14:18:39.456Z] Copying: 963/1024 [MB] (10 MBps) [2024-12-09T14:18:40.398Z] Copying: 974/1024 [MB] (10 MBps) [2024-12-09T14:18:41.340Z] Copying: 987/1024 [MB] (12 MBps) [2024-12-09T14:18:42.282Z] Copying: 1000/1024 [MB] (13 MBps) [2024-12-09T14:18:42.852Z] Copying: 1019/1024 [MB] (18 MBps) [2024-12-09T14:18:42.852Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-09 14:18:42.676003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.058 [2024-12-09 14:18:42.676078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:41.058 [2024-12-09 14:18:42.676108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:41.059 [2024-12-09 14:18:42.676122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.059 [2024-12-09 14:18:42.676159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:41.059 [2024-12-09 14:18:42.678880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.059 [2024-12-09 14:18:42.678912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:41.059 [2024-12-09 14:18:42.678922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:25:41.059 [2024-12-09 14:18:42.678929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.059 [2024-12-09 14:18:42.679144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.059 [2024-12-09 14:18:42.679153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:41.059 [2024-12-09 14:18:42.679162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:25:41.059 [2024-12-09 14:18:42.679173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.059 [2024-12-09 14:18:42.692773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.059 [2024-12-09 14:18:42.692813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:41.059 [2024-12-09 14:18:42.692826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.584 ms 00:25:41.059 [2024-12-09 14:18:42.692835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.059 [2024-12-09 14:18:42.699098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:41.059 [2024-12-09 14:18:42.699126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:41.059 [2024-12-09 14:18:42.699137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:25:41.059 [2024-12-09 14:18:42.699149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.059 [2024-12-09 14:18:42.722400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.059 [2024-12-09 14:18:42.722433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:41.059 [2024-12-09 14:18:42.722443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.206 ms 00:25:41.059 [2024-12-09 14:18:42.722450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.059 [2024-12-09 14:18:42.736347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.059 [2024-12-09 14:18:42.736379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:41.059 [2024-12-09 14:18:42.736390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.866 ms 00:25:41.059 [2024-12-09 14:18:42.736397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.321 [2024-12-09 14:18:42.893016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.321 [2024-12-09 14:18:42.893053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:41.321 [2024-12-09 14:18:42.893063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 156.585 ms 00:25:41.321 [2024-12-09 14:18:42.893071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.321 [2024-12-09 14:18:42.916712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.321 [2024-12-09 14:18:42.916742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:41.321 [2024-12-09 14:18:42.916752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.627 ms 00:25:41.321 [2024-12-09 14:18:42.916760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.321 [2024-12-09 14:18:42.939809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.321 [2024-12-09 14:18:42.939840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:41.321 [2024-12-09 14:18:42.939850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.019 ms 00:25:41.321 [2024-12-09 14:18:42.939857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.321 [2024-12-09 14:18:42.962514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.321 [2024-12-09 14:18:42.962566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:41.321 [2024-12-09 14:18:42.962576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.627 ms 00:25:41.321 [2024-12-09 14:18:42.962583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.321 [2024-12-09 14:18:42.985515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.321 [2024-12-09 14:18:42.985551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:41.321 [2024-12-09 14:18:42.985560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.882 ms 00:25:41.321 [2024-12-09 14:18:42.985567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.322 [2024-12-09 
14:18:42.985596] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:41.322 [2024-12-09 14:18:42.985609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:25:41.322 [2024-12-09 14:18:42.985619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 
14:18:42.985793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:25:41.322 [2024-12-09 14:18:42.985974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.985995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:41.322 [2024-12-09 14:18:42.986259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:41.323 [2024-12-09 14:18:42.986355] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:41.323 [2024-12-09 14:18:42.986362] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05373d07-4b4e-457d-b347-de8cd136f1a9 00:25:41.323 [2024-12-09 14:18:42.986371] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 00:25:41.323 [2024-12-09 14:18:42.986378] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132544 00:25:41.323 [2024-12-09 14:18:42.986389] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131584 00:25:41.323 [2024-12-09 14:18:42.986397] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:25:41.323 [2024-12-09 14:18:42.986408] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:41.323 [2024-12-09 14:18:42.986421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:41.323 [2024-12-09 14:18:42.986429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:41.323 [2024-12-09 14:18:42.986435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:41.323 [2024-12-09 14:18:42.986441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:41.323 [2024-12-09 14:18:42.986448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.323 [2024-12-09 14:18:42.986455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:41.323 [2024-12-09 14:18:42.986463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:25:41.323 [2024-12-09 14:18:42.986470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.323 [2024-12-09 14:18:42.998904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.323 [2024-12-09 14:18:42.998934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:41.323 [2024-12-09 14:18:42.998948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.420 ms 00:25:41.323 [2024-12-09 14:18:42.998956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.323 [2024-12-09 14:18:42.999303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.323 [2024-12-09 14:18:42.999312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:41.323 [2024-12-09 14:18:42.999320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:25:41.323 [2024-12-09 14:18:42.999327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.323 [2024-12-09 14:18:43.032131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.323 [2024-12-09 14:18:43.032165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.323 [2024-12-09 14:18:43.032174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.323 [2024-12-09 14:18:43.032181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.323 [2024-12-09 14:18:43.032230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.323 [2024-12-09 14:18:43.032238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.323 [2024-12-09 14:18:43.032246] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.323 [2024-12-09 14:18:43.032253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.323 [2024-12-09 14:18:43.032298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.323 [2024-12-09 14:18:43.032308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.323 [2024-12-09 14:18:43.032319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.323 [2024-12-09 14:18:43.032326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.323 [2024-12-09 14:18:43.032340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.323 [2024-12-09 14:18:43.032348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.323 [2024-12-09 14:18:43.032355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.323 [2024-12-09 14:18:43.032362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.323 [2024-12-09 14:18:43.109854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.323 [2024-12-09 14:18:43.109894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.323 [2024-12-09 14:18:43.109910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.323 [2024-12-09 14:18:43.109917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.584 [2024-12-09 14:18:43.173975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.584 [2024-12-09 14:18:43.174019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.584 [2024-12-09 14:18:43.174029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.584 [2024-12-09 14:18:43.174037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.584 [2024-12-09 14:18:43.174102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.584 [2024-12-09 14:18:43.174112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.584 [2024-12-09 14:18:43.174120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.584 [2024-12-09 14:18:43.174132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.584 [2024-12-09 14:18:43.174165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.584 [2024-12-09 14:18:43.174174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.584 [2024-12-09 14:18:43.174182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.584 [2024-12-09 14:18:43.174190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.584 [2024-12-09 14:18:43.174272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.584 [2024-12-09 14:18:43.174282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.584 [2024-12-09 14:18:43.174290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.584 [2024-12-09 14:18:43.174298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.584 [2024-12-09 14:18:43.174331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.584 [2024-12-09 14:18:43.174341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:25:41.584 [2024-12-09 14:18:43.174348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.584 [2024-12-09 14:18:43.174355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.584 [2024-12-09 14:18:43.174388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.584 [2024-12-09 14:18:43.174397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.584 [2024-12-09 14:18:43.174405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.584 [2024-12-09 14:18:43.174413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.584 [2024-12-09 14:18:43.174451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.584 [2024-12-09 14:18:43.174461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.584 [2024-12-09 14:18:43.174468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.584 [2024-12-09 14:18:43.174475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.584 [2024-12-09 14:18:43.174607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 498.584 ms, result 0 00:25:42.163 00:25:42.163 00:25:42.163 14:18:43 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:44.764 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:44.764 Process with pid 77297 is not found 00:25:44.764 Remove shared memory files 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77297 00:25:44.764 14:18:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77297 ']' 00:25:44.764 14:18:46 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77297 00:25:44.764 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77297) - No such process 00:25:44.764 14:18:46 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77297 is not found' 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:44.764 14:18:46 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:44.764 00:25:44.764 real 4m31.547s 00:25:44.764 user 4m21.263s 00:25:44.764 sys 0m10.706s 00:25:44.764 14:18:46 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:44.764 14:18:46 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:44.764 
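Note on the teardown just above: killprocess probes the target with kill -0, which tests for a pid's existence without delivering a signal. Here pid 77297 had already exited along with the FTL shutdown, so the probe fails and the script only prints the not-found notice. A minimal sketch of that pattern, with the pid and message taken from the trace and the retry logic of the real helper omitted:

  if kill -0 77297 2>/dev/null; then
      kill 77297                                  # target still running: terminate it
  else
      echo 'Process with pid 77297 is not found'  # already gone, as in this run
  fi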
************************************ 00:25:44.764 END TEST ftl_restore 00:25:44.764 ************************************ 00:25:44.764 14:18:46 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:44.764 14:18:46 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:44.764 14:18:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.764 14:18:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:44.764 ************************************ 00:25:44.764 START TEST ftl_dirty_shutdown 00:25:44.764 ************************************ 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:44.764 * Looking for test storage... 00:25:44.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:44.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.764 --rc genhtml_branch_coverage=1 00:25:44.764 --rc genhtml_function_coverage=1 00:25:44.764 --rc genhtml_legend=1 00:25:44.764 --rc geninfo_all_blocks=1 00:25:44.764 --rc geninfo_unexecuted_blocks=1 00:25:44.764 00:25:44.764 ' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:44.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.764 --rc genhtml_branch_coverage=1 00:25:44.764 --rc genhtml_function_coverage=1 00:25:44.764 --rc genhtml_legend=1 00:25:44.764 --rc geninfo_all_blocks=1 00:25:44.764 --rc geninfo_unexecuted_blocks=1 00:25:44.764 00:25:44.764 ' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:44.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.764 --rc genhtml_branch_coverage=1 00:25:44.764 --rc genhtml_function_coverage=1 00:25:44.764 --rc genhtml_legend=1 00:25:44.764 --rc geninfo_all_blocks=1 00:25:44.764 --rc geninfo_unexecuted_blocks=1 00:25:44.764 00:25:44.764 ' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:44.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.764 --rc genhtml_branch_coverage=1 00:25:44.764 --rc genhtml_function_coverage=1 00:25:44.764 --rc genhtml_legend=1 00:25:44.764 --rc geninfo_all_blocks=1 00:25:44.764 --rc geninfo_unexecuted_blocks=1 00:25:44.764 00:25:44.764 ' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:44.764 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:44.765 14:18:46 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80175 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80175 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80175 ']' 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.765 14:18:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:45.026 [2024-12-09 14:18:46.566998] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
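Note: this is the standard autotest launch sequence: spdk_tgt is started pinned to core 0 (-m 0x1), its pid is recorded in svcpid, and waitforlisten blocks until the target answers on the RPC socket. A minimal sketch of the wait loop, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as a cheap liveness query (the real helper also enforces a retry limit):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # poll until the RPC server responds; only then can the test issue bdev RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done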
00:25:45.026 [2024-12-09 14:18:46.567171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80175 ] 00:25:45.026 [2024-12-09 14:18:46.730193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.287 [2024-12-09 14:18:46.848232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.860 14:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.860 14:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:45.860 14:18:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:45.860 14:18:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:45.860 14:18:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:45.860 14:18:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:45.860 14:18:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:45.860 14:18:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:46.120 14:18:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:46.120 14:18:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:46.120 14:18:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:46.120 14:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:46.120 14:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:46.120 14:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:46.120 14:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:46.120 14:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:46.380 14:18:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:46.380 { 00:25:46.380 "name": "nvme0n1", 00:25:46.380 "aliases": [ 00:25:46.380 "9432a071-2dec-4062-8689-243464e87629" 00:25:46.380 ], 00:25:46.380 "product_name": "NVMe disk", 00:25:46.380 "block_size": 4096, 00:25:46.380 "num_blocks": 1310720, 00:25:46.380 "uuid": "9432a071-2dec-4062-8689-243464e87629", 00:25:46.380 "numa_id": -1, 00:25:46.380 "assigned_rate_limits": { 00:25:46.380 "rw_ios_per_sec": 0, 00:25:46.380 "rw_mbytes_per_sec": 0, 00:25:46.380 "r_mbytes_per_sec": 0, 00:25:46.380 "w_mbytes_per_sec": 0 00:25:46.380 }, 00:25:46.380 "claimed": true, 00:25:46.380 "claim_type": "read_many_write_one", 00:25:46.380 "zoned": false, 00:25:46.380 "supported_io_types": { 00:25:46.380 "read": true, 00:25:46.380 "write": true, 00:25:46.380 "unmap": true, 00:25:46.380 "flush": true, 00:25:46.380 "reset": true, 00:25:46.380 "nvme_admin": true, 00:25:46.380 "nvme_io": true, 00:25:46.380 "nvme_io_md": false, 00:25:46.380 "write_zeroes": true, 00:25:46.380 "zcopy": false, 00:25:46.380 "get_zone_info": false, 00:25:46.380 "zone_management": false, 00:25:46.380 "zone_append": false, 00:25:46.380 "compare": true, 00:25:46.380 "compare_and_write": false, 00:25:46.380 "abort": true, 00:25:46.380 "seek_hole": false, 00:25:46.380 "seek_data": false, 00:25:46.380 
"copy": true, 00:25:46.380 "nvme_iov_md": false 00:25:46.380 }, 00:25:46.380 "driver_specific": { 00:25:46.380 "nvme": [ 00:25:46.381 { 00:25:46.381 "pci_address": "0000:00:11.0", 00:25:46.381 "trid": { 00:25:46.381 "trtype": "PCIe", 00:25:46.381 "traddr": "0000:00:11.0" 00:25:46.381 }, 00:25:46.381 "ctrlr_data": { 00:25:46.381 "cntlid": 0, 00:25:46.381 "vendor_id": "0x1b36", 00:25:46.381 "model_number": "QEMU NVMe Ctrl", 00:25:46.381 "serial_number": "12341", 00:25:46.381 "firmware_revision": "8.0.0", 00:25:46.381 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:46.381 "oacs": { 00:25:46.381 "security": 0, 00:25:46.381 "format": 1, 00:25:46.381 "firmware": 0, 00:25:46.381 "ns_manage": 1 00:25:46.381 }, 00:25:46.381 "multi_ctrlr": false, 00:25:46.381 "ana_reporting": false 00:25:46.381 }, 00:25:46.381 "vs": { 00:25:46.381 "nvme_version": "1.4" 00:25:46.381 }, 00:25:46.381 "ns_data": { 00:25:46.381 "id": 1, 00:25:46.381 "can_share": false 00:25:46.381 } 00:25:46.381 } 00:25:46.381 ], 00:25:46.381 "mp_policy": "active_passive" 00:25:46.381 } 00:25:46.381 } 00:25:46.381 ]' 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:46.381 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:46.642 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=f29795b2-f781-4836-b1e0-de9a6d56367e 00:25:46.642 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:46.642 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f29795b2-f781-4836-b1e0-de9a6d56367e 00:25:46.902 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:47.163 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7da6cf08-9e37-427d-bbbd-ece54f2bee80 00:25:47.163 14:18:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7da6cf08-9e37-427d-bbbd-ece54f2bee80 00:25:47.434 14:18:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=78753914-7c2c-481c-a73a-60eb2ea78490 00:25:47.434 14:18:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 78753914-7c2c-481c-a73a-60eb2ea78490 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=78753914-7c2c-481c-a73a-60eb2ea78490 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 78753914-7c2c-481c-a73a-60eb2ea78490 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=78753914-7c2c-481c-a73a-60eb2ea78490 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:47.695 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78753914-7c2c-481c-a73a-60eb2ea78490 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:47.696 { 00:25:47.696 "name": "78753914-7c2c-481c-a73a-60eb2ea78490", 00:25:47.696 "aliases": [ 00:25:47.696 "lvs/nvme0n1p0" 00:25:47.696 ], 00:25:47.696 "product_name": "Logical Volume", 00:25:47.696 "block_size": 4096, 00:25:47.696 "num_blocks": 26476544, 00:25:47.696 "uuid": "78753914-7c2c-481c-a73a-60eb2ea78490", 00:25:47.696 "assigned_rate_limits": { 00:25:47.696 "rw_ios_per_sec": 0, 00:25:47.696 "rw_mbytes_per_sec": 0, 00:25:47.696 "r_mbytes_per_sec": 0, 00:25:47.696 "w_mbytes_per_sec": 0 00:25:47.696 }, 00:25:47.696 "claimed": false, 00:25:47.696 "zoned": false, 00:25:47.696 "supported_io_types": { 00:25:47.696 "read": true, 00:25:47.696 "write": true, 00:25:47.696 "unmap": true, 00:25:47.696 "flush": false, 00:25:47.696 "reset": true, 00:25:47.696 "nvme_admin": false, 00:25:47.696 "nvme_io": false, 00:25:47.696 "nvme_io_md": false, 00:25:47.696 "write_zeroes": true, 00:25:47.696 "zcopy": false, 00:25:47.696 "get_zone_info": false, 00:25:47.696 "zone_management": false, 00:25:47.696 "zone_append": false, 00:25:47.696 "compare": false, 00:25:47.696 "compare_and_write": false, 00:25:47.696 "abort": false, 00:25:47.696 "seek_hole": true, 00:25:47.696 "seek_data": true, 00:25:47.696 "copy": false, 00:25:47.696 "nvme_iov_md": false 00:25:47.696 }, 00:25:47.696 "driver_specific": { 00:25:47.696 "lvol": { 00:25:47.696 "lvol_store_uuid": "7da6cf08-9e37-427d-bbbd-ece54f2bee80", 00:25:47.696 "base_bdev": "nvme0n1", 00:25:47.696 "thin_provision": true, 00:25:47.696 "num_allocated_clusters": 0, 00:25:47.696 "snapshot": false, 00:25:47.696 "clone": false, 00:25:47.696 "esnap_clone": false 00:25:47.696 } 00:25:47.696 } 00:25:47.696 } 00:25:47.696 ]' 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:47.696 14:18:49 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:47.955 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:47.955 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:47.955 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 78753914-7c2c-481c-a73a-60eb2ea78490 00:25:47.955 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=78753914-7c2c-481c-a73a-60eb2ea78490 00:25:47.955 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.955 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:47.955 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:47.955 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78753914-7c2c-481c-a73a-60eb2ea78490 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:48.213 { 00:25:48.213 "name": "78753914-7c2c-481c-a73a-60eb2ea78490", 00:25:48.213 "aliases": [ 00:25:48.213 "lvs/nvme0n1p0" 00:25:48.213 ], 00:25:48.213 "product_name": "Logical Volume", 00:25:48.213 "block_size": 4096, 00:25:48.213 "num_blocks": 26476544, 00:25:48.213 "uuid": "78753914-7c2c-481c-a73a-60eb2ea78490", 00:25:48.213 "assigned_rate_limits": { 00:25:48.213 "rw_ios_per_sec": 0, 00:25:48.213 "rw_mbytes_per_sec": 0, 00:25:48.213 "r_mbytes_per_sec": 0, 00:25:48.213 "w_mbytes_per_sec": 0 00:25:48.213 }, 00:25:48.213 "claimed": false, 00:25:48.213 "zoned": false, 00:25:48.213 "supported_io_types": { 00:25:48.213 "read": true, 00:25:48.213 "write": true, 00:25:48.213 "unmap": true, 00:25:48.213 "flush": false, 00:25:48.213 "reset": true, 00:25:48.213 "nvme_admin": false, 00:25:48.213 "nvme_io": false, 00:25:48.213 "nvme_io_md": false, 00:25:48.213 "write_zeroes": true, 00:25:48.213 "zcopy": false, 00:25:48.213 "get_zone_info": false, 00:25:48.213 "zone_management": false, 00:25:48.213 "zone_append": false, 00:25:48.213 "compare": false, 00:25:48.213 "compare_and_write": false, 00:25:48.213 "abort": false, 00:25:48.213 "seek_hole": true, 00:25:48.213 "seek_data": true, 00:25:48.213 "copy": false, 00:25:48.213 "nvme_iov_md": false 00:25:48.213 }, 00:25:48.213 "driver_specific": { 00:25:48.213 "lvol": { 00:25:48.213 "lvol_store_uuid": "7da6cf08-9e37-427d-bbbd-ece54f2bee80", 00:25:48.213 "base_bdev": "nvme0n1", 00:25:48.213 "thin_provision": true, 00:25:48.213 "num_allocated_clusters": 0, 00:25:48.213 "snapshot": false, 00:25:48.213 "clone": false, 00:25:48.213 "esnap_clone": false 00:25:48.213 } 00:25:48.213 } 00:25:48.213 } 00:25:48.213 ]' 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:48.213 14:18:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:48.471 14:18:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:48.472 14:18:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 78753914-7c2c-481c-a73a-60eb2ea78490 00:25:48.472 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=78753914-7c2c-481c-a73a-60eb2ea78490 00:25:48.472 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:48.472 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:48.472 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:48.472 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78753914-7c2c-481c-a73a-60eb2ea78490 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:48.730 { 00:25:48.730 "name": "78753914-7c2c-481c-a73a-60eb2ea78490", 00:25:48.730 "aliases": [ 00:25:48.730 "lvs/nvme0n1p0" 00:25:48.730 ], 00:25:48.730 "product_name": "Logical Volume", 00:25:48.730 "block_size": 4096, 00:25:48.730 "num_blocks": 26476544, 00:25:48.730 "uuid": "78753914-7c2c-481c-a73a-60eb2ea78490", 00:25:48.730 "assigned_rate_limits": { 00:25:48.730 "rw_ios_per_sec": 0, 00:25:48.730 "rw_mbytes_per_sec": 0, 00:25:48.730 "r_mbytes_per_sec": 0, 00:25:48.730 "w_mbytes_per_sec": 0 00:25:48.730 }, 00:25:48.730 "claimed": false, 00:25:48.730 "zoned": false, 00:25:48.730 "supported_io_types": { 00:25:48.730 "read": true, 00:25:48.730 "write": true, 00:25:48.730 "unmap": true, 00:25:48.730 "flush": false, 00:25:48.730 "reset": true, 00:25:48.730 "nvme_admin": false, 00:25:48.730 "nvme_io": false, 00:25:48.730 "nvme_io_md": false, 00:25:48.730 "write_zeroes": true, 00:25:48.730 "zcopy": false, 00:25:48.730 "get_zone_info": false, 00:25:48.730 "zone_management": false, 00:25:48.730 "zone_append": false, 00:25:48.730 "compare": false, 00:25:48.730 "compare_and_write": false, 00:25:48.730 "abort": false, 00:25:48.730 "seek_hole": true, 00:25:48.730 "seek_data": true, 00:25:48.730 "copy": false, 00:25:48.730 "nvme_iov_md": false 00:25:48.730 }, 00:25:48.730 "driver_specific": { 00:25:48.730 "lvol": { 00:25:48.730 "lvol_store_uuid": "7da6cf08-9e37-427d-bbbd-ece54f2bee80", 00:25:48.730 "base_bdev": "nvme0n1", 00:25:48.730 "thin_provision": true, 00:25:48.730 "num_allocated_clusters": 0, 00:25:48.730 "snapshot": false, 00:25:48.730 "clone": false, 00:25:48.730 "esnap_clone": false 00:25:48.730 } 00:25:48.730 } 00:25:48.730 } 00:25:48.730 ]' 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 78753914-7c2c-481c-a73a-60eb2ea78490 
--l2p_dram_limit 10' 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:48.730 14:18:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 78753914-7c2c-481c-a73a-60eb2ea78490 --l2p_dram_limit 10 -c nvc0n1p0 00:25:48.991 [2024-12-09 14:18:50.576699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-12-09 14:18:50.576737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:48.991 [2024-12-09 14:18:50.576750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:48.991 [2024-12-09 14:18:50.576756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-12-09 14:18:50.576800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-12-09 14:18:50.576809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:48.991 [2024-12-09 14:18:50.576816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:48.991 [2024-12-09 14:18:50.576822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-12-09 14:18:50.576840] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:48.991 [2024-12-09 14:18:50.577383] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:48.991 [2024-12-09 14:18:50.577405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-12-09 14:18:50.577412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:48.991 [2024-12-09 14:18:50.577419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:25:48.991 [2024-12-09 14:18:50.577425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-12-09 14:18:50.577509] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2a702aa5-c8aa-46c4-9572-8d77a33e3e76 00:25:48.991 [2024-12-09 14:18:50.578417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-12-09 14:18:50.578441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:48.991 [2024-12-09 14:18:50.578449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:48.991 [2024-12-09 14:18:50.578456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-12-09 14:18:50.582995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-12-09 14:18:50.583025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:48.991 [2024-12-09 14:18:50.583032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.506 ms 00:25:48.991 [2024-12-09 14:18:50.583039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-12-09 14:18:50.583103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-12-09 14:18:50.583113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:48.991 [2024-12-09 14:18:50.583119] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:48.991 [2024-12-09 14:18:50.583128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-12-09 14:18:50.583165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.991 [2024-12-09 14:18:50.583174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:48.991 [2024-12-09 14:18:50.583182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:48.991 [2024-12-09 14:18:50.583189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.991 [2024-12-09 14:18:50.583205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:48.991 [2024-12-09 14:18:50.586063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.992 [2024-12-09 14:18:50.586089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:48.992 [2024-12-09 14:18:50.586098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:25:48.992 [2024-12-09 14:18:50.586104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.992 [2024-12-09 14:18:50.586135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.992 [2024-12-09 14:18:50.586142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:48.992 [2024-12-09 14:18:50.586150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:48.992 [2024-12-09 14:18:50.586156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.992 [2024-12-09 14:18:50.586175] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:48.992 [2024-12-09 14:18:50.586282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:48.992 [2024-12-09 14:18:50.586294] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:48.992 [2024-12-09 14:18:50.586302] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:48.992 [2024-12-09 14:18:50.586311] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586317] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586325] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:48.992 [2024-12-09 14:18:50.586331] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:48.992 [2024-12-09 14:18:50.586340] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:48.992 [2024-12-09 14:18:50.586346] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:48.992 [2024-12-09 14:18:50.586353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.992 [2024-12-09 14:18:50.586363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:48.992 [2024-12-09 14:18:50.586370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:25:48.992 [2024-12-09 14:18:50.586375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.992 [2024-12-09 14:18:50.586441] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.992 [2024-12-09 14:18:50.586448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:48.992 [2024-12-09 14:18:50.586455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:48.992 [2024-12-09 14:18:50.586460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.992 [2024-12-09 14:18:50.586547] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:48.992 [2024-12-09 14:18:50.586555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:48.992 [2024-12-09 14:18:50.586562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:48.992 [2024-12-09 14:18:50.586581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:48.992 [2024-12-09 14:18:50.586600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:48.992 [2024-12-09 14:18:50.586612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:48.992 [2024-12-09 14:18:50.586617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:48.992 [2024-12-09 14:18:50.586623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:48.992 [2024-12-09 14:18:50.586629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:48.992 [2024-12-09 14:18:50.586635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:48.992 [2024-12-09 14:18:50.586640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:48.992 [2024-12-09 14:18:50.586655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:48.992 [2024-12-09 14:18:50.586673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:48.992 [2024-12-09 14:18:50.586690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:48.992 [2024-12-09 14:18:50.586706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586717] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:48.992 [2024-12-09 14:18:50.586722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:48.992 [2024-12-09 14:18:50.586740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:48.992 [2024-12-09 14:18:50.586752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:48.992 [2024-12-09 14:18:50.586757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:48.992 [2024-12-09 14:18:50.586764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:48.992 [2024-12-09 14:18:50.586770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:48.992 [2024-12-09 14:18:50.586776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:48.992 [2024-12-09 14:18:50.586781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:48.992 [2024-12-09 14:18:50.586792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:48.992 [2024-12-09 14:18:50.586798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586803] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:48.992 [2024-12-09 14:18:50.586810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:48.992 [2024-12-09 14:18:50.586815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:48.992 [2024-12-09 14:18:50.586827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:48.992 [2024-12-09 14:18:50.586836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:48.992 [2024-12-09 14:18:50.586841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:48.992 [2024-12-09 14:18:50.586847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:48.992 [2024-12-09 14:18:50.586853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:48.992 [2024-12-09 14:18:50.586860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:48.992 [2024-12-09 14:18:50.586866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:48.992 [2024-12-09 14:18:50.586876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:48.992 [2024-12-09 14:18:50.586882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:48.992 [2024-12-09 14:18:50.586889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:48.992 [2024-12-09 14:18:50.586894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:48.992 [2024-12-09 14:18:50.586900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:48.992 [2024-12-09 14:18:50.586906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:48.992 [2024-12-09 14:18:50.586912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:48.992 [2024-12-09 14:18:50.586918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:48.992 [2024-12-09 14:18:50.586925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:48.992 [2024-12-09 14:18:50.586930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:48.992 [2024-12-09 14:18:50.586938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:48.992 [2024-12-09 14:18:50.586944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:48.992 [2024-12-09 14:18:50.586951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:48.992 [2024-12-09 14:18:50.586956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:48.992 [2024-12-09 14:18:50.586963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:48.992 [2024-12-09 14:18:50.586969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:48.992 [2024-12-09 14:18:50.586976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:48.992 [2024-12-09 14:18:50.586982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:48.992 [2024-12-09 14:18:50.586989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:48.992 [2024-12-09 14:18:50.586994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:48.992 [2024-12-09 14:18:50.587001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:48.992 [2024-12-09 14:18:50.587006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.993 [2024-12-09 14:18:50.587013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:48.993 [2024-12-09 14:18:50.587019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:25:48.993 [2024-12-09 14:18:50.587025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.993 [2024-12-09 14:18:50.587062] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:48.993 [2024-12-09 14:18:50.587072] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:53.190 [2024-12-09 14:18:54.505078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.505162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:53.190 [2024-12-09 14:18:54.505179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3917.999 ms 00:25:53.190 [2024-12-09 14:18:54.505190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.190 [2024-12-09 14:18:54.534139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.534203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.190 [2024-12-09 14:18:54.534216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.726 ms 00:25:53.190 [2024-12-09 14:18:54.534227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.190 [2024-12-09 14:18:54.534372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.534386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:53.190 [2024-12-09 14:18:54.534396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:53.190 [2024-12-09 14:18:54.534411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.190 [2024-12-09 14:18:54.568893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.568950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.190 [2024-12-09 14:18:54.568963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.444 ms 00:25:53.190 [2024-12-09 14:18:54.568974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.190 [2024-12-09 14:18:54.569012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.569026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.190 [2024-12-09 14:18:54.569036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:53.190 [2024-12-09 14:18:54.569054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.190 [2024-12-09 14:18:54.569709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.569752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.190 [2024-12-09 14:18:54.569763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:25:53.190 [2024-12-09 14:18:54.569774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.190 [2024-12-09 14:18:54.569895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.569907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.190 [2024-12-09 14:18:54.569919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:53.190 [2024-12-09 14:18:54.569933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.190 [2024-12-09 14:18:54.587735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.587791] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.190 [2024-12-09 14:18:54.587802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.784 ms 00:25:53.190 [2024-12-09 14:18:54.587813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.190 [2024-12-09 14:18:54.614335] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:53.190 [2024-12-09 14:18:54.618853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.190 [2024-12-09 14:18:54.618902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:53.190 [2024-12-09 14:18:54.618918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.925 ms 00:25:53.190 [2024-12-09 14:18:54.618927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.725450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.725524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:53.191 [2024-12-09 14:18:54.725554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.467 ms 00:25:53.191 [2024-12-09 14:18:54.725564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.725781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.725798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:53.191 [2024-12-09 14:18:54.725813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:25:53.191 [2024-12-09 14:18:54.725821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.752450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.752505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:53.191 [2024-12-09 14:18:54.752521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.565 ms 00:25:53.191 [2024-12-09 14:18:54.752530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.777974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.778027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:53.191 [2024-12-09 14:18:54.778043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.372 ms 00:25:53.191 [2024-12-09 14:18:54.778051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.778688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.778717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:53.191 [2024-12-09 14:18:54.778729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:25:53.191 [2024-12-09 14:18:54.778740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.871761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.871821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:53.191 [2024-12-09 14:18:54.871845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.971 ms 00:25:53.191 [2024-12-09 14:18:54.871854] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.900315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.900369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:53.191 [2024-12-09 14:18:54.900386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.361 ms 00:25:53.191 [2024-12-09 14:18:54.900395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.927516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.927575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:53.191 [2024-12-09 14:18:54.927591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.058 ms 00:25:53.191 [2024-12-09 14:18:54.927599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.954776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.954828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:53.191 [2024-12-09 14:18:54.954844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.117 ms 00:25:53.191 [2024-12-09 14:18:54.954851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.954913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.954923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:53.191 [2024-12-09 14:18:54.954939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:53.191 [2024-12-09 14:18:54.954948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.955064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.191 [2024-12-09 14:18:54.955078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:53.191 [2024-12-09 14:18:54.955089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:53.191 [2024-12-09 14:18:54.955096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.191 [2024-12-09 14:18:54.956286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4379.059 ms, result 0 00:25:53.191 { 00:25:53.191 "name": "ftl0", 00:25:53.191 "uuid": "2a702aa5-c8aa-46c4-9572-8d77a33e3e76" 00:25:53.191 } 00:25:53.191 14:18:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:53.191 14:18:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:53.451 14:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:53.451 14:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:53.451 14:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:53.710 /dev/nbd0 00:25:53.710 14:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:53.710 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:53.711 1+0 records in 00:25:53.711 1+0 records out 00:25:53.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017416 s, 23.5 MB/s 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:53.711 14:18:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:53.711 [2024-12-09 14:18:55.489123] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:25:53.711 [2024-12-09 14:18:55.489244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80323 ] 00:25:53.970 [2024-12-09 14:18:55.649003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.970 [2024-12-09 14:18:55.745517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.352  [2024-12-09T14:18:58.085Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-09T14:18:59.019Z] Copying: 386/1024 [MB] (191 MBps) [2024-12-09T14:19:00.391Z] Copying: 583/1024 [MB] (196 MBps) [2024-12-09T14:19:00.970Z] Copying: 800/1024 [MB] (216 MBps) [2024-12-09T14:19:01.552Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:25:59.758 00:25:59.758 14:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:01.657 14:19:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:01.657 [2024-12-09 14:19:03.404947] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
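The trace up to this point is the write phase of the dirty-shutdown test: the running target's bdev configuration is captured with save_subsystem_config (wrapped in a {"subsystems": [...]} envelope, presumably redirected into the ftl.json consumed later; the redirection itself is not visible in the trace), the FTL bdev is exposed to the kernel over NBD, and a 1 GiB random pattern is generated and checksummed before being written through /dev/nbd0. A condensed sketch of that sequence, with repo paths shortened and a simplified wait loop standing in for the waitfornbd helper traced above:

    # Capture the bdev config for the post-crash restart
    # (output file assumed; the trace only shows the echo/rpc.py parts).
    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > test/ftl/config/ftl.json

    # Expose the FTL bdev as a kernel block device.
    modprobe nbd
    scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
    for i in $(seq 1 20); do                      # simplified waitfornbd
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                 # delay assumed, not shown in trace
    done

    # 1 GiB random pattern: generate, checksum, then write through NBD.
    build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
    md5sum test/ftl/testfile
    build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct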
00:26:01.657 [2024-12-09 14:19:03.405246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80404 ] 00:26:01.915 [2024-12-09 14:19:03.559647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.915 [2024-12-09 14:19:03.637120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.285  [2024-12-09T14:19:06.010Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-09T14:19:06.943Z] Copying: 43/1024 [MB] (22 MBps) [2024-12-09T14:19:07.876Z] Copying: 67/1024 [MB] (24 MBps) [2024-12-09T14:19:09.247Z] Copying: 96/1024 [MB] (29 MBps) [2024-12-09T14:19:09.832Z] Copying: 128/1024 [MB] (32 MBps) [2024-12-09T14:19:11.204Z] Copying: 163/1024 [MB] (34 MBps) [2024-12-09T14:19:12.136Z] Copying: 197/1024 [MB] (34 MBps) [2024-12-09T14:19:13.068Z] Copying: 232/1024 [MB] (35 MBps) [2024-12-09T14:19:14.001Z] Copying: 262/1024 [MB] (29 MBps) [2024-12-09T14:19:14.933Z] Copying: 293/1024 [MB] (31 MBps) [2024-12-09T14:19:15.871Z] Copying: 325/1024 [MB] (31 MBps) [2024-12-09T14:19:17.239Z] Copying: 360/1024 [MB] (35 MBps) [2024-12-09T14:19:17.823Z] Copying: 390/1024 [MB] (29 MBps) [2024-12-09T14:19:19.203Z] Copying: 420/1024 [MB] (29 MBps) [2024-12-09T14:19:20.135Z] Copying: 449/1024 [MB] (29 MBps) [2024-12-09T14:19:21.065Z] Copying: 480/1024 [MB] (30 MBps) [2024-12-09T14:19:21.997Z] Copying: 513/1024 [MB] (32 MBps) [2024-12-09T14:19:22.929Z] Copying: 544/1024 [MB] (30 MBps) [2024-12-09T14:19:23.862Z] Copying: 576/1024 [MB] (32 MBps) [2024-12-09T14:19:25.234Z] Copying: 609/1024 [MB] (32 MBps) [2024-12-09T14:19:26.165Z] Copying: 640/1024 [MB] (30 MBps) [2024-12-09T14:19:27.097Z] Copying: 674/1024 [MB] (34 MBps) [2024-12-09T14:19:28.029Z] Copying: 705/1024 [MB] (31 MBps) [2024-12-09T14:19:28.962Z] Copying: 735/1024 [MB] (29 MBps) [2024-12-09T14:19:29.893Z] Copying: 765/1024 [MB] (29 MBps) [2024-12-09T14:19:30.825Z] Copying: 796/1024 [MB] (31 MBps) [2024-12-09T14:19:32.197Z] Copying: 826/1024 [MB] (29 MBps) [2024-12-09T14:19:33.130Z] Copying: 858/1024 [MB] (32 MBps) [2024-12-09T14:19:34.061Z] Copying: 888/1024 [MB] (30 MBps) [2024-12-09T14:19:34.993Z] Copying: 918/1024 [MB] (30 MBps) [2024-12-09T14:19:35.945Z] Copying: 949/1024 [MB] (30 MBps) [2024-12-09T14:19:36.877Z] Copying: 966/1024 [MB] (17 MBps) [2024-12-09T14:19:37.810Z] Copying: 1000/1024 [MB] (33 MBps) [2024-12-09T14:19:38.068Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:26:36.274 00:26:36.532 14:19:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:36.532 14:19:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:36.532 14:19:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:36.790 [2024-12-09 14:19:38.466178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.466219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:36.790 [2024-12-09 14:19:38.466230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:36.790 [2024-12-09 14:19:38.466238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.466258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:26:36.790 [2024-12-09 14:19:38.468331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.468359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:36.790 [2024-12-09 14:19:38.468369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:26:36.790 [2024-12-09 14:19:38.468375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.469940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.469967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:36.790 [2024-12-09 14:19:38.469976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:26:36.790 [2024-12-09 14:19:38.469982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.481268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.481295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:36.790 [2024-12-09 14:19:38.481309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.268 ms 00:26:36.790 [2024-12-09 14:19:38.481315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.486266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.486293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:36.790 [2024-12-09 14:19:38.486304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:26:36.790 [2024-12-09 14:19:38.486311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.504453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.504480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:36.790 [2024-12-09 14:19:38.504490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.086 ms 00:26:36.790 [2024-12-09 14:19:38.504496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.516332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.516362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:36.790 [2024-12-09 14:19:38.516375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.804 ms 00:26:36.790 [2024-12-09 14:19:38.516382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.516489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.516502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:36.790 [2024-12-09 14:19:38.516510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:36.790 [2024-12-09 14:19:38.516516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.534305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.534332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:36.790 [2024-12-09 14:19:38.534342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.774 ms 00:26:36.790 
[2024-12-09 14:19:38.534348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.551504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.551530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:36.790 [2024-12-09 14:19:38.551550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.126 ms 00:26:36.790 [2024-12-09 14:19:38.551557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.790 [2024-12-09 14:19:38.568150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.790 [2024-12-09 14:19:38.568176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:36.790 [2024-12-09 14:19:38.568185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.562 ms 00:26:36.790 [2024-12-09 14:19:38.568190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.052 [2024-12-09 14:19:38.584821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.052 [2024-12-09 14:19:38.584846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:37.052 [2024-12-09 14:19:38.584855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.574 ms 00:26:37.052 [2024-12-09 14:19:38.584860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.052 [2024-12-09 14:19:38.584887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:37.052 [2024-12-09 14:19:38.584898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:26:37.052 [2024-12-09 14:19:38.584991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.584997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:37.052 [2024-12-09 14:19:38.585384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585477] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:37.053 [2024-12-09 14:19:38.585568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:37.053 [2024-12-09 14:19:38.585575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a702aa5-c8aa-46c4-9572-8d77a33e3e76 00:26:37.053 [2024-12-09 14:19:38.585582] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:37.053 [2024-12-09 14:19:38.585590] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:37.053 [2024-12-09 14:19:38.585597] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:37.053 [2024-12-09 14:19:38.585604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:37.053 [2024-12-09 14:19:38.585609] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:37.053 [2024-12-09 14:19:38.585616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:37.053 [2024-12-09 14:19:38.585621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:37.053 [2024-12-09 14:19:38.585627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:37.053 [2024-12-09 14:19:38.585633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:37.053 [2024-12-09 14:19:38.585639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.053 [2024-12-09 14:19:38.585645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:37.053 [2024-12-09 14:19:38.585652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:26:37.053 [2024-12-09 14:19:38.585657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.595284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.053 [2024-12-09 14:19:38.595310] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:37.053 [2024-12-09 14:19:38.595318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.602 ms 00:26:37.053 [2024-12-09 14:19:38.595325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.595606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.053 [2024-12-09 14:19:38.595622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:37.053 [2024-12-09 14:19:38.595630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:26:37.053 [2024-12-09 14:19:38.595636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.628148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.628175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.053 [2024-12-09 14:19:38.628185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.628191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.628236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.628243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.053 [2024-12-09 14:19:38.628250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.628256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.628332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.628342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.053 [2024-12-09 14:19:38.628349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.628355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.628370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.628376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.053 [2024-12-09 14:19:38.628383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.628388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.688057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.688091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.053 [2024-12-09 14:19:38.688101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.688108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.736941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.736977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.053 [2024-12-09 14:19:38.736987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.736993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.737053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:37.053 [2024-12-09 14:19:38.737061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.053 [2024-12-09 14:19:38.737070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.737076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.737139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.737147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.053 [2024-12-09 14:19:38.737155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.737161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.737232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.737239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.053 [2024-12-09 14:19:38.737246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.737254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.737278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.737285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:37.053 [2024-12-09 14:19:38.737293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.737299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.737328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.737335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.053 [2024-12-09 14:19:38.737342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.053 [2024-12-09 14:19:38.737349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.053 [2024-12-09 14:19:38.737384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.053 [2024-12-09 14:19:38.737392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.054 [2024-12-09 14:19:38.737399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.054 [2024-12-09 14:19:38.737404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.054 [2024-12-09 14:19:38.737505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.300 ms, result 0 00:26:37.054 true 00:26:37.054 14:19:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80175 00:26:37.054 14:19:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80175 00:26:37.054 14:19:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:37.054 [2024-12-09 14:19:38.831129] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
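With the pattern written, the test tears the stack down and simulates a crash: the NBD export is flushed and stopped, the FTL bdev is unloaded (the 'FTL shutdown' management trace above, result 0), and the spdk_tgt process is then killed outright before a second 1 GiB test file is generated. Condensed from the trace, with the PID taken verbatim from the log:

    sync /dev/nbd0
    scripts/rpc.py nbd_stop_disk /dev/nbd0
    scripts/rpc.py bdev_ftl_unload -b ftl0           # produces the 'FTL shutdown' trace
    kill -9 80175                                    # SIGKILL spdk_tgt; no cleanup handlers run
    rm -f /dev/shm/spdk_tgt_trace.pid80175
    build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144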
00:26:37.054 [2024-12-09 14:19:38.831246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80779 ] 00:26:37.312 [2024-12-09 14:19:38.986945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.312 [2024-12-09 14:19:39.061947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.684  [2024-12-09T14:19:41.412Z] Copying: 252/1024 [MB] (252 MBps) [2024-12-09T14:19:42.347Z] Copying: 510/1024 [MB] (257 MBps) [2024-12-09T14:19:43.280Z] Copying: 767/1024 [MB] (256 MBps) [2024-12-09T14:19:43.280Z] Copying: 1019/1024 [MB] (251 MBps) [2024-12-09T14:19:43.847Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:26:42.053 00:26:42.053 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80175 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:42.053 14:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:42.312 [2024-12-09 14:19:43.886601] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:42.312 [2024-12-09 14:19:43.886718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80832 ] 00:26:42.312 [2024-12-09 14:19:44.042601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.570 [2024-12-09 14:19:44.119255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.570 [2024-12-09 14:19:44.329269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:42.570 [2024-12-09 14:19:44.329317] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:42.831 [2024-12-09 14:19:44.391966] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:42.831 [2024-12-09 14:19:44.392275] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:42.831 [2024-12-09 14:19:44.392390] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:42.831 [2024-12-09 14:19:44.561306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.831 [2024-12-09 14:19:44.561345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:42.832 [2024-12-09 14:19:44.561358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:42.832 [2024-12-09 14:19:44.561369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.561415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.561425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.832 [2024-12-09 14:19:44.561433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:42.832 [2024-12-09 14:19:44.561440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.561457] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:42.832 
[2024-12-09 14:19:44.562148] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:42.832 [2024-12-09 14:19:44.562167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.562175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.832 [2024-12-09 14:19:44.562183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:26:42.832 [2024-12-09 14:19:44.562190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.563304] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:42.832 [2024-12-09 14:19:44.576240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.576270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:42.832 [2024-12-09 14:19:44.576281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.938 ms 00:26:42.832 [2024-12-09 14:19:44.576290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.576345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.576354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:42.832 [2024-12-09 14:19:44.576362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:42.832 [2024-12-09 14:19:44.576369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.581465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.581491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.832 [2024-12-09 14:19:44.581506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.040 ms 00:26:42.832 [2024-12-09 14:19:44.581513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.581594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.581603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.832 [2024-12-09 14:19:44.581611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:42.832 [2024-12-09 14:19:44.581620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.581657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.581666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:42.832 [2024-12-09 14:19:44.581674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:42.832 [2024-12-09 14:19:44.581681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.581701] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:42.832 [2024-12-09 14:19:44.585183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.585206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.832 [2024-12-09 14:19:44.585215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.486 ms 00:26:42.832 [2024-12-09 14:19:44.585222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:42.832 [2024-12-09 14:19:44.585254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.585262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:42.832 [2024-12-09 14:19:44.585270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:42.832 [2024-12-09 14:19:44.585279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.585298] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:42.832 [2024-12-09 14:19:44.585317] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:42.832 [2024-12-09 14:19:44.585351] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:42.832 [2024-12-09 14:19:44.585366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:42.832 [2024-12-09 14:19:44.585469] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:42.832 [2024-12-09 14:19:44.585478] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:42.832 [2024-12-09 14:19:44.585491] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:42.832 [2024-12-09 14:19:44.585501] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:42.832 [2024-12-09 14:19:44.585509] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:42.832 [2024-12-09 14:19:44.585517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:42.832 [2024-12-09 14:19:44.585524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:42.832 [2024-12-09 14:19:44.585530] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:42.832 [2024-12-09 14:19:44.585549] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:42.832 [2024-12-09 14:19:44.585557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.585567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:42.832 [2024-12-09 14:19:44.585580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:26:42.832 [2024-12-09 14:19:44.585592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.585677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.832 [2024-12-09 14:19:44.585690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:42.832 [2024-12-09 14:19:44.585698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:42.832 [2024-12-09 14:19:44.585705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.832 [2024-12-09 14:19:44.585818] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:42.832 [2024-12-09 14:19:44.585829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:42.832 [2024-12-09 14:19:44.585837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.832 [2024-12-09 14:19:44.585844] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.832 [2024-12-09 14:19:44.585852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:42.832 [2024-12-09 14:19:44.585859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:42.832 [2024-12-09 14:19:44.585866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:42.832 [2024-12-09 14:19:44.585873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:42.832 [2024-12-09 14:19:44.585880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:42.832 [2024-12-09 14:19:44.585892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.832 [2024-12-09 14:19:44.585899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:42.832 [2024-12-09 14:19:44.585905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:42.832 [2024-12-09 14:19:44.585912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.832 [2024-12-09 14:19:44.585919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:42.832 [2024-12-09 14:19:44.585926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:42.832 [2024-12-09 14:19:44.585932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.832 [2024-12-09 14:19:44.585938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:42.832 [2024-12-09 14:19:44.585945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:42.832 [2024-12-09 14:19:44.585951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.832 [2024-12-09 14:19:44.585958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:42.832 [2024-12-09 14:19:44.585965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:42.832 [2024-12-09 14:19:44.585971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.832 [2024-12-09 14:19:44.585978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:42.832 [2024-12-09 14:19:44.585985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:42.832 [2024-12-09 14:19:44.585991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.832 [2024-12-09 14:19:44.585997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:42.832 [2024-12-09 14:19:44.586003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:42.832 [2024-12-09 14:19:44.586009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.832 [2024-12-09 14:19:44.586016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:42.832 [2024-12-09 14:19:44.586022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:42.832 [2024-12-09 14:19:44.586028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.832 [2024-12-09 14:19:44.586034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:42.832 [2024-12-09 14:19:44.586041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:42.832 [2024-12-09 14:19:44.586048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.832 [2024-12-09 14:19:44.586054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:42.832 
[2024-12-09 14:19:44.586061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:42.832 [2024-12-09 14:19:44.586067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.832 [2024-12-09 14:19:44.586073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:42.832 [2024-12-09 14:19:44.586080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:42.832 [2024-12-09 14:19:44.586086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.832 [2024-12-09 14:19:44.586093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:42.832 [2024-12-09 14:19:44.586099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:42.832 [2024-12-09 14:19:44.586105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.832 [2024-12-09 14:19:44.586111] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:42.833 [2024-12-09 14:19:44.586121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:42.833 [2024-12-09 14:19:44.586128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.833 [2024-12-09 14:19:44.586135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.833 [2024-12-09 14:19:44.586142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:42.833 [2024-12-09 14:19:44.586148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:42.833 [2024-12-09 14:19:44.586156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:42.833 [2024-12-09 14:19:44.586162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:42.833 [2024-12-09 14:19:44.586169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:42.833 [2024-12-09 14:19:44.586175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:42.833 [2024-12-09 14:19:44.586184] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:42.833 [2024-12-09 14:19:44.586193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.833 [2024-12-09 14:19:44.586202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:42.833 [2024-12-09 14:19:44.586209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:42.833 [2024-12-09 14:19:44.586216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:42.833 [2024-12-09 14:19:44.586223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:42.833 [2024-12-09 14:19:44.586229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:42.833 [2024-12-09 14:19:44.586236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:42.833 [2024-12-09 14:19:44.586243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:26:42.833 [2024-12-09 14:19:44.586249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:42.833 [2024-12-09 14:19:44.586256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:42.833 [2024-12-09 14:19:44.586263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:42.833 [2024-12-09 14:19:44.586270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:42.833 [2024-12-09 14:19:44.586277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:42.833 [2024-12-09 14:19:44.586284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:42.833 [2024-12-09 14:19:44.586291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:42.833 [2024-12-09 14:19:44.586298] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:42.833 [2024-12-09 14:19:44.586305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.833 [2024-12-09 14:19:44.586313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:42.833 [2024-12-09 14:19:44.586320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:42.833 [2024-12-09 14:19:44.586327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:42.833 [2024-12-09 14:19:44.586334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:42.833 [2024-12-09 14:19:44.586341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.833 [2024-12-09 14:19:44.586348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:42.833 [2024-12-09 14:19:44.586356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:26:42.833 [2024-12-09 14:19:44.586365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.833 [2024-12-09 14:19:44.612751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.833 [2024-12-09 14:19:44.612784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.833 [2024-12-09 14:19:44.612794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.337 ms 00:26:42.833 [2024-12-09 14:19:44.612805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.833 [2024-12-09 14:19:44.612888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.833 [2024-12-09 14:19:44.612896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:42.833 [2024-12-09 14:19:44.612904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:42.833 [2024-12-09 
14:19:44.612911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.652116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.652153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:43.095 [2024-12-09 14:19:44.652165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.154 ms 00:26:43.095 [2024-12-09 14:19:44.652172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.652212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.652223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:43.095 [2024-12-09 14:19:44.652231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:43.095 [2024-12-09 14:19:44.652238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.652623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.652645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:43.095 [2024-12-09 14:19:44.652660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:26:43.095 [2024-12-09 14:19:44.652668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.652791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.652800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:43.095 [2024-12-09 14:19:44.652808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:26:43.095 [2024-12-09 14:19:44.652815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.666187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.666215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:43.095 [2024-12-09 14:19:44.666225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.351 ms 00:26:43.095 [2024-12-09 14:19:44.666232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.679236] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:43.095 [2024-12-09 14:19:44.679269] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:43.095 [2024-12-09 14:19:44.679281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.679289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:43.095 [2024-12-09 14:19:44.679298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.946 ms 00:26:43.095 [2024-12-09 14:19:44.679304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.704026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.704062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:43.095 [2024-12-09 14:19:44.704072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.680 ms 00:26:43.095 [2024-12-09 14:19:44.704080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:43.095 [2024-12-09 14:19:44.716321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.716352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:43.095 [2024-12-09 14:19:44.716362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.196 ms 00:26:43.095 [2024-12-09 14:19:44.716369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.728258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.728287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:43.095 [2024-12-09 14:19:44.728297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.853 ms 00:26:43.095 [2024-12-09 14:19:44.728304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.728953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.728977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:43.095 [2024-12-09 14:19:44.728986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:26:43.095 [2024-12-09 14:19:44.728994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.786686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.786882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:43.095 [2024-12-09 14:19:44.786902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.674 ms 00:26:43.095 [2024-12-09 14:19:44.786911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.797483] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:43.095 [2024-12-09 14:19:44.799911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.799945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:43.095 [2024-12-09 14:19:44.799961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.962 ms 00:26:43.095 [2024-12-09 14:19:44.799969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.800058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.800069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:43.095 [2024-12-09 14:19:44.800078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:43.095 [2024-12-09 14:19:44.800086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.800148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.800158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:43.095 [2024-12-09 14:19:44.800167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:43.095 [2024-12-09 14:19:44.800178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.800197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.800206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:26:43.095 [2024-12-09 14:19:44.800214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:43.095 [2024-12-09 14:19:44.800222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.800252] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:43.095 [2024-12-09 14:19:44.800264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.800271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:43.095 [2024-12-09 14:19:44.800282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:43.095 [2024-12-09 14:19:44.800290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.095 [2024-12-09 14:19:44.824965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.095 [2024-12-09 14:19:44.825004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:43.095 [2024-12-09 14:19:44.825017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.656 ms 00:26:43.095 [2024-12-09 14:19:44.825026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.096 [2024-12-09 14:19:44.825116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.096 [2024-12-09 14:19:44.825126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:43.096 [2024-12-09 14:19:44.825135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:43.096 [2024-12-09 14:19:44.825148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.096 [2024-12-09 14:19:44.826204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.458 ms, result 0 00:26:44.482  [2024-12-09T14:19:46.849Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-09T14:19:48.232Z] Copying: 22/1024 [MB] (12 MBps) [2024-12-09T14:19:49.175Z] Copying: 41/1024 [MB] (18 MBps) [2024-12-09T14:19:50.114Z] Copying: 64/1024 [MB] (23 MBps) [2024-12-09T14:19:51.057Z] Copying: 95/1024 [MB] (30 MBps) [2024-12-09T14:19:51.998Z] Copying: 118/1024 [MB] (23 MBps) [2024-12-09T14:19:52.940Z] Copying: 135/1024 [MB] (17 MBps) [2024-12-09T14:19:53.883Z] Copying: 154/1024 [MB] (18 MBps) [2024-12-09T14:19:55.267Z] Copying: 176/1024 [MB] (22 MBps) [2024-12-09T14:19:55.840Z] Copying: 196/1024 [MB] (20 MBps) [2024-12-09T14:19:57.227Z] Copying: 216/1024 [MB] (19 MBps) [2024-12-09T14:19:58.172Z] Copying: 238/1024 [MB] (21 MBps) [2024-12-09T14:19:59.115Z] Copying: 259/1024 [MB] (21 MBps) [2024-12-09T14:20:00.064Z] Copying: 271/1024 [MB] (11 MBps) [2024-12-09T14:20:01.007Z] Copying: 287828/1048576 [kB] (10112 kBps) [2024-12-09T14:20:01.951Z] Copying: 297936/1048576 [kB] (10108 kBps) [2024-12-09T14:20:02.893Z] Copying: 306/1024 [MB] (15 MBps) [2024-12-09T14:20:03.864Z] Copying: 317/1024 [MB] (11 MBps) [2024-12-09T14:20:05.270Z] Copying: 331/1024 [MB] (14 MBps) [2024-12-09T14:20:05.843Z] Copying: 346/1024 [MB] (15 MBps) [2024-12-09T14:20:07.230Z] Copying: 364/1024 [MB] (17 MBps) [2024-12-09T14:20:08.171Z] Copying: 380/1024 [MB] (16 MBps) [2024-12-09T14:20:09.113Z] Copying: 396/1024 [MB] (15 MBps) [2024-12-09T14:20:10.055Z] Copying: 413/1024 [MB] (17 MBps) [2024-12-09T14:20:10.992Z] Copying: 431/1024 [MB] (17 MBps) [2024-12-09T14:20:11.935Z] Copying: 456/1024 [MB] (25 MBps) [2024-12-09T14:20:12.880Z] Copying: 482/1024 [MB] (25 MBps) 
[2024-12-09T14:20:14.265Z] Copying: 499/1024 [MB] (17 MBps) [2024-12-09T14:20:15.210Z] Copying: 520/1024 [MB] (20 MBps) [2024-12-09T14:20:16.144Z] Copying: 539/1024 [MB] (19 MBps) [2024-12-09T14:20:17.084Z] Copying: 571/1024 [MB] (31 MBps) [2024-12-09T14:20:18.028Z] Copying: 603/1024 [MB] (32 MBps) [2024-12-09T14:20:18.969Z] Copying: 623/1024 [MB] (20 MBps) [2024-12-09T14:20:19.913Z] Copying: 643/1024 [MB] (20 MBps) [2024-12-09T14:20:20.856Z] Copying: 668/1024 [MB] (24 MBps) [2024-12-09T14:20:22.241Z] Copying: 685/1024 [MB] (17 MBps) [2024-12-09T14:20:23.185Z] Copying: 703/1024 [MB] (17 MBps) [2024-12-09T14:20:24.128Z] Copying: 722/1024 [MB] (18 MBps) [2024-12-09T14:20:25.068Z] Copying: 735/1024 [MB] (13 MBps) [2024-12-09T14:20:26.011Z] Copying: 753/1024 [MB] (17 MBps) [2024-12-09T14:20:26.954Z] Copying: 772/1024 [MB] (18 MBps) [2024-12-09T14:20:27.899Z] Copying: 783/1024 [MB] (11 MBps) [2024-12-09T14:20:28.841Z] Copying: 812392/1048576 [kB] (10240 kBps) [2024-12-09T14:20:30.239Z] Copying: 822352/1048576 [kB] (9960 kBps) [2024-12-09T14:20:31.183Z] Copying: 820/1024 [MB] (17 MBps) [2024-12-09T14:20:32.126Z] Copying: 834/1024 [MB] (13 MBps) [2024-12-09T14:20:33.066Z] Copying: 851/1024 [MB] (17 MBps) [2024-12-09T14:20:34.010Z] Copying: 868/1024 [MB] (16 MBps) [2024-12-09T14:20:34.954Z] Copying: 879/1024 [MB] (11 MBps) [2024-12-09T14:20:35.898Z] Copying: 896/1024 [MB] (16 MBps) [2024-12-09T14:20:36.843Z] Copying: 906/1024 [MB] (10 MBps) [2024-12-09T14:20:38.229Z] Copying: 920/1024 [MB] (13 MBps) [2024-12-09T14:20:39.172Z] Copying: 935/1024 [MB] (14 MBps) [2024-12-09T14:20:40.129Z] Copying: 946/1024 [MB] (11 MBps) [2024-12-09T14:20:41.073Z] Copying: 956/1024 [MB] (10 MBps) [2024-12-09T14:20:42.018Z] Copying: 968/1024 [MB] (11 MBps) [2024-12-09T14:20:42.962Z] Copying: 979/1024 [MB] (11 MBps) [2024-12-09T14:20:43.906Z] Copying: 990/1024 [MB] (10 MBps) [2024-12-09T14:20:44.865Z] Copying: 1002/1024 [MB] (12 MBps) [2024-12-09T14:20:46.249Z] Copying: 1017/1024 [MB] (14 MBps) [2024-12-09T14:20:46.250Z] Copying: 1048340/1048576 [kB] (6684 kBps) [2024-12-09T14:20:46.250Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-09 14:20:46.091106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.456 [2024-12-09 14:20:46.091165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:44.456 [2024-12-09 14:20:46.091189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:44.456 [2024-12-09 14:20:46.091203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.456 [2024-12-09 14:20:46.091338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:44.456 [2024-12-09 14:20:46.098360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.456 [2024-12-09 14:20:46.098499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:44.456 [2024-12-09 14:20:46.098524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.995 ms 00:27:44.456 [2024-12-09 14:20:46.098546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.456 [2024-12-09 14:20:46.108135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.456 [2024-12-09 14:20:46.108180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:44.456 [2024-12-09 14:20:46.108193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.446 ms 00:27:44.456 [2024-12-09 
14:20:46.108201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.456 [2024-12-09 14:20:46.135059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.456 [2024-12-09 14:20:46.135100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:44.456 [2024-12-09 14:20:46.135113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.841 ms 00:27:44.456 [2024-12-09 14:20:46.135121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.456 [2024-12-09 14:20:46.141257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.456 [2024-12-09 14:20:46.141378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:44.456 [2024-12-09 14:20:46.141394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 00:27:44.456 [2024-12-09 14:20:46.141402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.456 [2024-12-09 14:20:46.165246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.456 [2024-12-09 14:20:46.165276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:44.456 [2024-12-09 14:20:46.165286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.806 ms 00:27:44.456 [2024-12-09 14:20:46.165294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.456 [2024-12-09 14:20:46.179723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.456 [2024-12-09 14:20:46.179841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:44.456 [2024-12-09 14:20:46.179857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.399 ms 00:27:44.456 [2024-12-09 14:20:46.179865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.718 [2024-12-09 14:20:46.462433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.718 [2024-12-09 14:20:46.462489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:44.718 [2024-12-09 14:20:46.462500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 282.535 ms 00:27:44.718 [2024-12-09 14:20:46.462507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.718 [2024-12-09 14:20:46.486548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.718 [2024-12-09 14:20:46.486581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:44.718 [2024-12-09 14:20:46.486591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.026 ms 00:27:44.718 [2024-12-09 14:20:46.486608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.718 [2024-12-09 14:20:46.509646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.981 [2024-12-09 14:20:46.509784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:44.981 [2024-12-09 14:20:46.509800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.005 ms 00:27:44.981 [2024-12-09 14:20:46.509807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.981 [2024-12-09 14:20:46.532750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.981 [2024-12-09 14:20:46.532871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:44.981 [2024-12-09 14:20:46.532887] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.914 ms 00:27:44.981 [2024-12-09 14:20:46.532894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.981 [2024-12-09 14:20:46.556011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.981 [2024-12-09 14:20:46.556137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:44.981 [2024-12-09 14:20:46.556153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.065 ms 00:27:44.981 [2024-12-09 14:20:46.556161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.981 [2024-12-09 14:20:46.556189] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:44.981 [2024-12-09 14:20:46.556203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104448 / 261120 wr_cnt: 1 state: open 00:27:44.981 [2024-12-09 14:20:46.556214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 
wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556744] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:44.981 [2024-12-09 14:20:46.556788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556938] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:44.982 [2024-12-09 14:20:46.556991] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:44.982 [2024-12-09 14:20:46.557002] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a702aa5-c8aa-46c4-9572-8d77a33e3e76 00:27:44.982 [2024-12-09 14:20:46.557016] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104448 00:27:44.982 [2024-12-09 14:20:46.557024] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105408 00:27:44.982 [2024-12-09 14:20:46.557032] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104448 00:27:44.982 [2024-12-09 14:20:46.557040] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:27:44.982 [2024-12-09 14:20:46.557047] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:44.982 [2024-12-09 14:20:46.557055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:44.982 [2024-12-09 14:20:46.557062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:44.982 [2024-12-09 14:20:46.557069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:44.982 [2024-12-09 14:20:46.557076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:44.982 [2024-12-09 14:20:46.557084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.982 [2024-12-09 14:20:46.557110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:44.982 [2024-12-09 14:20:46.557120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:27:44.982 [2024-12-09 14:20:46.557127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.569723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.982 [2024-12-09 14:20:46.569753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:44.982 [2024-12-09 14:20:46.569764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.577 ms 00:27:44.982 [2024-12-09 14:20:46.569773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.570133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:44.982 [2024-12-09 14:20:46.570142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:44.982 [2024-12-09 14:20:46.570162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:27:44.982 [2024-12-09 14:20:46.570170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.604229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:27:44.982 [2024-12-09 14:20:46.604272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:44.982 [2024-12-09 14:20:46.604282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.604290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.604349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.604357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:44.982 [2024-12-09 14:20:46.604369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.604376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.604450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.604460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:44.982 [2024-12-09 14:20:46.604468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.604476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.604491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.604499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:44.982 [2024-12-09 14:20:46.604507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.604517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.685991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.686052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:44.982 [2024-12-09 14:20:46.686066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.686075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.754995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.755201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:44.982 [2024-12-09 14:20:46.755229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.755238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.755307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.755318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:44.982 [2024-12-09 14:20:46.755327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.755335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.755392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.755403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:44.982 [2024-12-09 14:20:46.755411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.755420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 
14:20:46.755534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.755580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:44.982 [2024-12-09 14:20:46.755589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.755598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.755642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.755652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:44.982 [2024-12-09 14:20:46.755661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.755669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.755718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.755728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:44.982 [2024-12-09 14:20:46.755737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.755745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.755794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:44.982 [2024-12-09 14:20:46.755805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:44.982 [2024-12-09 14:20:46.755814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:44.982 [2024-12-09 14:20:46.755822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:44.982 [2024-12-09 14:20:46.755963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 666.319 ms, result 0 00:27:46.369 00:27:46.369 00:27:46.369 14:20:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:48.911 14:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:48.911 [2024-12-09 14:20:50.401741] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
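Aside on the statistics block dumped just before this shutdown: WAF is the write amplification factor, total media writes divided by user-issued writes. From the logged counters, 105408 / 104448 ≈ 1.0092, matching the reported value; the extra ~960 blocks are presumably FTL metadata persistence (superblock, band info, P2L). A minimal shell sketch to recompute it, assuming this console output has been saved to a file named build.log (hypothetical name) with one log entry per line:

  # Recompute WAF from the ftl_debug counters; build.log is a hypothetical
  # save of this console output, one entry per line.
  total=$(grep -o 'total writes: [0-9]*' build.log | tail -n1 | awk '{print $3}')
  user=$(grep -o 'user writes: [0-9]*' build.log | tail -n1 | awk '{print $3}')
  # WAF = total media writes / user writes: 105408 / 104448 = 1.0092
  awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.4f\n", t / u }'

The spdk_dd relaunch above (--ib=ftl0 --count=262144 --json=.../ftl.json) then re-attaches the FTL device from the saved JSON config and reads the 262144 blocks back, which is why a second full 'FTL startup' trace follows.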
00:27:48.911 [2024-12-09 14:20:50.402081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81508 ] 00:27:48.911 [2024-12-09 14:20:50.567587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.911 [2024-12-09 14:20:50.691031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.485 [2024-12-09 14:20:50.992055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:49.485 [2024-12-09 14:20:50.992143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:49.485 [2024-12-09 14:20:51.156092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.485 [2024-12-09 14:20:51.156158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:49.485 [2024-12-09 14:20:51.156174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:49.485 [2024-12-09 14:20:51.156183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.485 [2024-12-09 14:20:51.156240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.485 [2024-12-09 14:20:51.156254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:49.485 [2024-12-09 14:20:51.156263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:49.485 [2024-12-09 14:20:51.156271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.485 [2024-12-09 14:20:51.156292] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:49.485 [2024-12-09 14:20:51.157388] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:49.485 [2024-12-09 14:20:51.157451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.485 [2024-12-09 14:20:51.157462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:49.485 [2024-12-09 14:20:51.157474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:27:49.485 [2024-12-09 14:20:51.157482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.159220] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:49.486 [2024-12-09 14:20:51.173531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.486 [2024-12-09 14:20:51.173588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:49.486 [2024-12-09 14:20:51.173602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.313 ms 00:27:49.486 [2024-12-09 14:20:51.173610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.173694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.486 [2024-12-09 14:20:51.173705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:49.486 [2024-12-09 14:20:51.173714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:49.486 [2024-12-09 14:20:51.173721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.181880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:49.486 [2024-12-09 14:20:51.181925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:49.486 [2024-12-09 14:20:51.181937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.080 ms 00:27:49.486 [2024-12-09 14:20:51.181952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.182036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.486 [2024-12-09 14:20:51.182045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:49.486 [2024-12-09 14:20:51.182054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:49.486 [2024-12-09 14:20:51.182061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.182106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.486 [2024-12-09 14:20:51.182116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:49.486 [2024-12-09 14:20:51.182124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:49.486 [2024-12-09 14:20:51.182132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.182159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:49.486 [2024-12-09 14:20:51.186310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.486 [2024-12-09 14:20:51.186348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:49.486 [2024-12-09 14:20:51.186362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.157 ms 00:27:49.486 [2024-12-09 14:20:51.186370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.186409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.486 [2024-12-09 14:20:51.186418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:49.486 [2024-12-09 14:20:51.186426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:49.486 [2024-12-09 14:20:51.186434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.186486] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:49.486 [2024-12-09 14:20:51.186511] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:49.486 [2024-12-09 14:20:51.186573] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:49.486 [2024-12-09 14:20:51.186601] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:49.486 [2024-12-09 14:20:51.186712] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:49.486 [2024-12-09 14:20:51.186724] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:49.486 [2024-12-09 14:20:51.186736] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:49.486 [2024-12-09 14:20:51.186747] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:49.486 [2024-12-09 14:20:51.186756] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:49.486 [2024-12-09 14:20:51.186764] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:49.486 [2024-12-09 14:20:51.186772] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:49.486 [2024-12-09 14:20:51.186783] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:49.486 [2024-12-09 14:20:51.186791] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:49.486 [2024-12-09 14:20:51.186799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.486 [2024-12-09 14:20:51.186807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:49.486 [2024-12-09 14:20:51.186815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:27:49.486 [2024-12-09 14:20:51.186823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.186907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.486 [2024-12-09 14:20:51.186916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:49.486 [2024-12-09 14:20:51.186924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:49.486 [2024-12-09 14:20:51.186932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.486 [2024-12-09 14:20:51.187039] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:49.486 [2024-12-09 14:20:51.187051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:49.486 [2024-12-09 14:20:51.187059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:49.486 [2024-12-09 14:20:51.187067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:49.486 [2024-12-09 14:20:51.187082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:49.486 [2024-12-09 14:20:51.187095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:49.486 [2024-12-09 14:20:51.187103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:49.486 [2024-12-09 14:20:51.187116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:49.486 [2024-12-09 14:20:51.187123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:49.486 [2024-12-09 14:20:51.187129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:49.486 [2024-12-09 14:20:51.187143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:49.486 [2024-12-09 14:20:51.187151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:49.486 [2024-12-09 14:20:51.187160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:49.486 [2024-12-09 14:20:51.187175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:49.486 [2024-12-09 14:20:51.187181] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:49.486 [2024-12-09 14:20:51.187198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.486 [2024-12-09 14:20:51.187212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:49.486 [2024-12-09 14:20:51.187218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.486 [2024-12-09 14:20:51.187232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:49.486 [2024-12-09 14:20:51.187239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.486 [2024-12-09 14:20:51.187253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:49.486 [2024-12-09 14:20:51.187260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.486 [2024-12-09 14:20:51.187273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:49.486 [2024-12-09 14:20:51.187281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:49.486 [2024-12-09 14:20:51.187294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:49.486 [2024-12-09 14:20:51.187301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:49.486 [2024-12-09 14:20:51.187307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:49.486 [2024-12-09 14:20:51.187315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:49.486 [2024-12-09 14:20:51.187322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:49.486 [2024-12-09 14:20:51.187329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:49.486 [2024-12-09 14:20:51.187341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:49.486 [2024-12-09 14:20:51.187349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187356] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:49.486 [2024-12-09 14:20:51.187364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:49.486 [2024-12-09 14:20:51.187371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:49.486 [2024-12-09 14:20:51.187378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.486 [2024-12-09 14:20:51.187388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:49.486 [2024-12-09 14:20:51.187395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:49.486 [2024-12-09 14:20:51.187401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:49.486 
[2024-12-09 14:20:51.187408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:49.486 [2024-12-09 14:20:51.187415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:49.486 [2024-12-09 14:20:51.187422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:49.487 [2024-12-09 14:20:51.187431] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:49.487 [2024-12-09 14:20:51.187441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.487 [2024-12-09 14:20:51.187453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:49.487 [2024-12-09 14:20:51.187460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:49.487 [2024-12-09 14:20:51.187468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:49.487 [2024-12-09 14:20:51.187476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:49.487 [2024-12-09 14:20:51.187483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:49.487 [2024-12-09 14:20:51.187491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:49.487 [2024-12-09 14:20:51.187498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:49.487 [2024-12-09 14:20:51.187506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:49.487 [2024-12-09 14:20:51.187513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:49.487 [2024-12-09 14:20:51.187520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:49.487 [2024-12-09 14:20:51.187527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:49.487 [2024-12-09 14:20:51.187547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:49.487 [2024-12-09 14:20:51.187583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:49.487 [2024-12-09 14:20:51.187590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:49.487 [2024-12-09 14:20:51.187598] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:49.487 [2024-12-09 14:20:51.187607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.487 [2024-12-09 14:20:51.187616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:49.487 [2024-12-09 14:20:51.187624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:49.487 [2024-12-09 14:20:51.187631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:49.487 [2024-12-09 14:20:51.187638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:49.487 [2024-12-09 14:20:51.187647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.487 [2024-12-09 14:20:51.187655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:49.487 [2024-12-09 14:20:51.187664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:27:49.487 [2024-12-09 14:20:51.187671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.487 [2024-12-09 14:20:51.219729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.487 [2024-12-09 14:20:51.219941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.487 [2024-12-09 14:20:51.219961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.008 ms 00:27:49.487 [2024-12-09 14:20:51.219977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.487 [2024-12-09 14:20:51.220070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.487 [2024-12-09 14:20:51.220080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:49.487 [2024-12-09 14:20:51.220089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:49.487 [2024-12-09 14:20:51.220096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.487 [2024-12-09 14:20:51.267843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.487 [2024-12-09 14:20:51.267899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.487 [2024-12-09 14:20:51.267913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.687 ms 00:27:49.487 [2024-12-09 14:20:51.267922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.487 [2024-12-09 14:20:51.267972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.487 [2024-12-09 14:20:51.267983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.487 [2024-12-09 14:20:51.267996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:49.487 [2024-12-09 14:20:51.268004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.487 [2024-12-09 14:20:51.268620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.487 [2024-12-09 14:20:51.268649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.487 [2024-12-09 14:20:51.268659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:27:49.487 [2024-12-09 14:20:51.268667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.487 [2024-12-09 14:20:51.268823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.487 [2024-12-09 14:20:51.268882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.487 [2024-12-09 14:20:51.268902] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:27:49.487 [2024-12-09 14:20:51.268910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.748 [2024-12-09 14:20:51.284892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.284938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.749 [2024-12-09 14:20:51.284949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.959 ms 00:27:49.749 [2024-12-09 14:20:51.284957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.299465] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:49.749 [2024-12-09 14:20:51.299514] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:49.749 [2024-12-09 14:20:51.299529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.299559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:49.749 [2024-12-09 14:20:51.299573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.457 ms 00:27:49.749 [2024-12-09 14:20:51.299596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.325382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.325431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:49.749 [2024-12-09 14:20:51.325445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.725 ms 00:27:49.749 [2024-12-09 14:20:51.325453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.338513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.338700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:49.749 [2024-12-09 14:20:51.338720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.006 ms 00:27:49.749 [2024-12-09 14:20:51.338728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.351769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.351820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:49.749 [2024-12-09 14:20:51.351833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.929 ms 00:27:49.749 [2024-12-09 14:20:51.351841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.352520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.352569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:49.749 [2024-12-09 14:20:51.352592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:27:49.749 [2024-12-09 14:20:51.352600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.417883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.418104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:49.749 [2024-12-09 14:20:51.418138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.258 ms 00:27:49.749 [2024-12-09 14:20:51.418148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.429514] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:49.749 [2024-12-09 14:20:51.432633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.432794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:49.749 [2024-12-09 14:20:51.432814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.435 ms 00:27:49.749 [2024-12-09 14:20:51.432823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.432925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.432938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:49.749 [2024-12-09 14:20:51.432951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:49.749 [2024-12-09 14:20:51.432959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.434742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.434792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:49.749 [2024-12-09 14:20:51.434803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:27:49.749 [2024-12-09 14:20:51.434813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.434846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.434856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:49.749 [2024-12-09 14:20:51.434866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:49.749 [2024-12-09 14:20:51.434874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.434923] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:49.749 [2024-12-09 14:20:51.434935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.434945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:49.749 [2024-12-09 14:20:51.434955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:49.749 [2024-12-09 14:20:51.434964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.461033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.461083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:49.749 [2024-12-09 14:20:51.461125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.048 ms 00:27:49.749 [2024-12-09 14:20:51.461133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.749 [2024-12-09 14:20:51.461218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.749 [2024-12-09 14:20:51.461229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:49.749 [2024-12-09 14:20:51.461239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:49.749 [2024-12-09 14:20:51.461248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
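The Action / name / duration / status quadruples above are printed by trace_step() in mngt/ftl_mngt.c for each step of the FTL management pipeline. A minimal sketch for pulling those per-step timings back out of a console log like this one, assuming the output is saved line-per-entry to console.log (a hypothetical filename):

  awk '
    # remember the step name, then pair it with the duration line that follows
    /trace_step/ && /name:/     { sub(/.*name: /, ""); step = $0; next }
    /trace_step/ && /duration:/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                                  printf "%-35s %9.3f ms\n", step, d; total += d }
    END { printf "%-35s %9.3f ms\n", "sum of steps", total }
  ' console.log

For the startup sequence here the per-step sum lands a few milliseconds below the overall figure in the "Management process finished" line just below, since that total also covers time spent between steps.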
00:27:49.749 [2024-12-09 14:20:51.462505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.927 ms, result 0 00:27:51.187  [2024-12-09T14:20:53.921Z] Copying: 1112/1048576 [kB] (1112 kBps) [2024-12-09T14:20:54.863Z] Copying: 4352/1048576 [kB] (3240 kBps) [2024-12-09T14:20:55.806Z] Copying: 14016/1048576 [kB] (9664 kBps) [2024-12-09T14:20:56.750Z] Copying: 29/1024 [MB] (16 MBps) [2024-12-09T14:20:57.693Z] Copying: 46/1024 [MB] (16 MBps) [2024-12-09T14:20:59.077Z] Copying: 63/1024 [MB] (17 MBps) [2024-12-09T14:20:59.650Z] Copying: 81/1024 [MB] (17 MBps) [2024-12-09T14:21:01.034Z] Copying: 98/1024 [MB] (17 MBps) [2024-12-09T14:21:01.978Z] Copying: 113/1024 [MB] (15 MBps) [2024-12-09T14:21:02.923Z] Copying: 136/1024 [MB] (22 MBps) [2024-12-09T14:21:03.865Z] Copying: 152/1024 [MB] (15 MBps) [2024-12-09T14:21:04.807Z] Copying: 173/1024 [MB] (20 MBps) [2024-12-09T14:21:05.794Z] Copying: 200/1024 [MB] (27 MBps) [2024-12-09T14:21:06.732Z] Copying: 230/1024 [MB] (29 MBps) [2024-12-09T14:21:07.672Z] Copying: 249/1024 [MB] (19 MBps) [2024-12-09T14:21:09.056Z] Copying: 276/1024 [MB] (26 MBps) [2024-12-09T14:21:10.001Z] Copying: 306/1024 [MB] (29 MBps) [2024-12-09T14:21:10.944Z] Copying: 329/1024 [MB] (22 MBps) [2024-12-09T14:21:11.891Z] Copying: 345/1024 [MB] (16 MBps) [2024-12-09T14:21:12.835Z] Copying: 368/1024 [MB] (22 MBps) [2024-12-09T14:21:13.780Z] Copying: 393/1024 [MB] (25 MBps) [2024-12-09T14:21:14.723Z] Copying: 425/1024 [MB] (31 MBps) [2024-12-09T14:21:15.663Z] Copying: 452/1024 [MB] (27 MBps) [2024-12-09T14:21:17.049Z] Copying: 471/1024 [MB] (19 MBps) [2024-12-09T14:21:17.998Z] Copying: 493/1024 [MB] (22 MBps) [2024-12-09T14:21:18.942Z] Copying: 517/1024 [MB] (23 MBps) [2024-12-09T14:21:19.886Z] Copying: 532/1024 [MB] (15 MBps) [2024-12-09T14:21:20.828Z] Copying: 548/1024 [MB] (15 MBps) [2024-12-09T14:21:21.770Z] Copying: 564/1024 [MB] (15 MBps) [2024-12-09T14:21:22.784Z] Copying: 579/1024 [MB] (15 MBps) [2024-12-09T14:21:23.727Z] Copying: 595/1024 [MB] (15 MBps) [2024-12-09T14:21:24.671Z] Copying: 611/1024 [MB] (15 MBps) [2024-12-09T14:21:26.058Z] Copying: 627/1024 [MB] (16 MBps) [2024-12-09T14:21:27.001Z] Copying: 643/1024 [MB] (15 MBps) [2024-12-09T14:21:27.943Z] Copying: 658/1024 [MB] (15 MBps) [2024-12-09T14:21:28.887Z] Copying: 674/1024 [MB] (15 MBps) [2024-12-09T14:21:29.832Z] Copying: 689/1024 [MB] (15 MBps) [2024-12-09T14:21:30.774Z] Copying: 704/1024 [MB] (15 MBps) [2024-12-09T14:21:31.719Z] Copying: 719/1024 [MB] (14 MBps) [2024-12-09T14:21:32.661Z] Copying: 734/1024 [MB] (14 MBps) [2024-12-09T14:21:34.046Z] Copying: 750/1024 [MB] (15 MBps) [2024-12-09T14:21:34.988Z] Copying: 765/1024 [MB] (15 MBps) [2024-12-09T14:21:35.938Z] Copying: 780/1024 [MB] (14 MBps) [2024-12-09T14:21:36.880Z] Copying: 795/1024 [MB] (15 MBps) [2024-12-09T14:21:37.846Z] Copying: 810/1024 [MB] (15 MBps) [2024-12-09T14:21:38.791Z] Copying: 825/1024 [MB] (14 MBps) [2024-12-09T14:21:39.734Z] Copying: 841/1024 [MB] (15 MBps) [2024-12-09T14:21:40.675Z] Copying: 860/1024 [MB] (19 MBps) [2024-12-09T14:21:42.061Z] Copying: 892/1024 [MB] (32 MBps) [2024-12-09T14:21:43.005Z] Copying: 914/1024 [MB] (22 MBps) [2024-12-09T14:21:43.948Z] Copying: 931/1024 [MB] (17 MBps) [2024-12-09T14:21:44.891Z] Copying: 961/1024 [MB] (29 MBps) [2024-12-09T14:21:45.835Z] Copying: 991/1024 [MB] (29 MBps) [2024-12-09T14:21:46.407Z] Copying: 1011/1024 [MB] (20 MBps) [2024-12-09T14:21:46.407Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-09 14:21:46.390555] 
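The Copying: lines above are the flattened redraws of a progress meter (presumably spdk_dd's, updating about once per second of wall clock, with Jenkins stamping each redraw). As a rough cross-check using plain arithmetic only: 1024 MiB at the reported 18 MBps average is about 57 s, which lines up with the roughly 55 s of wall clock between the end of 'FTL startup' (14:20:51) and the final update (14:21:46):

  awk 'BEGIN { printf "1024 MiB / 18 MBps = %.1f s\n", 1024 / 18 }'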
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.613 [2024-12-09 14:21:46.390816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:44.613 [2024-12-09 14:21:46.390906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:44.613 [2024-12-09 14:21:46.390936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.613 [2024-12-09 14:21:46.390991] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:44.613 [2024-12-09 14:21:46.394649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.613 [2024-12-09 14:21:46.394825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:44.613 [2024-12-09 14:21:46.394905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:28:44.613 [2024-12-09 14:21:46.395025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.613 [2024-12-09 14:21:46.395347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.613 [2024-12-09 14:21:46.395454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:44.613 [2024-12-09 14:21:46.395521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:28:44.613 [2024-12-09 14:21:46.395557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.875 [2024-12-09 14:21:46.409168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.875 [2024-12-09 14:21:46.409227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:44.875 [2024-12-09 14:21:46.409242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.584 ms 00:28:44.875 [2024-12-09 14:21:46.409253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.875 [2024-12-09 14:21:46.415620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.875 [2024-12-09 14:21:46.415765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:44.875 [2024-12-09 14:21:46.416173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:28:44.875 [2024-12-09 14:21:46.416226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.875 [2024-12-09 14:21:46.442882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.875 [2024-12-09 14:21:46.443069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:44.875 [2024-12-09 14:21:46.443134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.556 ms 00:28:44.875 [2024-12-09 14:21:46.443158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.875 [2024-12-09 14:21:46.458505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.875 [2024-12-09 14:21:46.458677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:44.875 [2024-12-09 14:21:46.458740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.298 ms 00:28:44.875 [2024-12-09 14:21:46.458763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.875 [2024-12-09 14:21:46.463379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.876 [2024-12-09 14:21:46.463519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:44.876 [2024-12-09 14:21:46.463602] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:28:44.876 [2024-12-09 14:21:46.463655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.876 [2024-12-09 14:21:46.489295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.876 [2024-12-09 14:21:46.489444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:44.876 [2024-12-09 14:21:46.489500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.602 ms 00:28:44.876 [2024-12-09 14:21:46.489522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.876 [2024-12-09 14:21:46.514732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.876 [2024-12-09 14:21:46.514924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:44.876 [2024-12-09 14:21:46.515089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.144 ms 00:28:44.876 [2024-12-09 14:21:46.515128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.876 [2024-12-09 14:21:46.540049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.876 [2024-12-09 14:21:46.540096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:44.876 [2024-12-09 14:21:46.540110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.438 ms 00:28:44.876 [2024-12-09 14:21:46.540118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.876 [2024-12-09 14:21:46.564795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.876 [2024-12-09 14:21:46.564842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:44.876 [2024-12-09 14:21:46.564853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.604 ms 00:28:44.876 [2024-12-09 14:21:46.564861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.876 [2024-12-09 14:21:46.564903] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:44.876 [2024-12-09 14:21:46.564921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:44.876 [2024-12-09 14:21:46.564933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:44.876 [2024-12-09 14:21:46.564941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.564950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.564958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.564966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.564974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.564983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.564990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.564999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 
14:21:46.565006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:28:44.876 [2024-12-09 14:21:46.565227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:44.876 [2024-12-09 14:21:46.565482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:44.877 [2024-12-09 14:21:46.565801] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:44.877 [2024-12-09 14:21:46.565809] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a702aa5-c8aa-46c4-9572-8d77a33e3e76 00:28:44.877 [2024-12-09 14:21:46.565817] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:44.877 [2024-12-09 14:21:46.565824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 160192 00:28:44.877 [2024-12-09 14:21:46.565838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 158208 00:28:44.877 [2024-12-09 14:21:46.565847] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:28:44.877 [2024-12-09 14:21:46.565855] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:44.877 [2024-12-09 14:21:46.565872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:44.877 [2024-12-09 14:21:46.565884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:44.877 [2024-12-09 14:21:46.565892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:44.877 [2024-12-09 14:21:46.565898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:44.877 [2024-12-09 14:21:46.565906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:44.877 [2024-12-09 14:21:46.565915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:44.877 [2024-12-09 14:21:46.565923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:28:44.877 [2024-12-09 14:21:46.565931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.877 [2024-12-09 14:21:46.579780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.877 [2024-12-09 14:21:46.579820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:44.877 [2024-12-09 14:21:46.579832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.828 ms 00:28:44.877 [2024-12-09 14:21:46.579841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.877 [2024-12-09 14:21:46.580237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.877 [2024-12-09 14:21:46.580247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:44.877 [2024-12-09 14:21:46.580257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:28:44.877 [2024-12-09 14:21:46.580264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.877 [2024-12-09 14:21:46.616463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.877 [2024-12-09 14:21:46.616673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:44.877 [2024-12-09 14:21:46.616693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.877 [2024-12-09 14:21:46.616702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.877 [2024-12-09 14:21:46.616766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.877 [2024-12-09 14:21:46.616775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:44.877 [2024-12-09 14:21:46.616784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.877 [2024-12-09 14:21:46.616792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.877 [2024-12-09 14:21:46.616891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.877 [2024-12-09 14:21:46.616903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:44.877 [2024-12-09 14:21:46.616911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.877 [2024-12-09 14:21:46.616919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.877 [2024-12-09 14:21:46.616936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.877 [2024-12-09 14:21:46.616945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:44.877 [2024-12-09 14:21:46.616953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.877 [2024-12-09 14:21:46.616961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.137 [2024-12-09 14:21:46.700600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.137 [2024-12-09 14:21:46.700658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:45.137 [2024-12-09 14:21:46.700672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.137 [2024-12-09 14:21:46.700681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.138 
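Two quick consistency checks on the shutdown dump above, as plain awk arithmetic (a sketch, not the actual ftl_debug.c computation): the one closed band (261120 blocks) plus the one open band (1536 blocks) account exactly for the reported total valid LBAs, and the write-amplification factor is total media writes divided by user writes:

  awk 'BEGIN {
    printf "valid LBAs: 261120 + 1536 = %d\n", 261120 + 1536
    printf "WAF: 160192 / 158208 = %.4f\n",   160192 / 158208
  }'

Both results match the "total valid LBAs: 262656" and "WAF: 1.0125" lines from ftl_dev_dump_stats above.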
[2024-12-09 14:21:46.768270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.138 [2024-12-09 14:21:46.768478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:45.138 [2024-12-09 14:21:46.768497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.138 [2024-12-09 14:21:46.768506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.138 [2024-12-09 14:21:46.768593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.138 [2024-12-09 14:21:46.768611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:45.138 [2024-12-09 14:21:46.768620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.138 [2024-12-09 14:21:46.768629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.138 [2024-12-09 14:21:46.768688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.138 [2024-12-09 14:21:46.768698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:45.138 [2024-12-09 14:21:46.768707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.138 [2024-12-09 14:21:46.768716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.138 [2024-12-09 14:21:46.768822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.138 [2024-12-09 14:21:46.768833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:45.138 [2024-12-09 14:21:46.768845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.138 [2024-12-09 14:21:46.768854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.138 [2024-12-09 14:21:46.768890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.138 [2024-12-09 14:21:46.768900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:45.138 [2024-12-09 14:21:46.768909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.138 [2024-12-09 14:21:46.768917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.138 [2024-12-09 14:21:46.768958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.138 [2024-12-09 14:21:46.768968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:45.138 [2024-12-09 14:21:46.768980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.138 [2024-12-09 14:21:46.768988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.138 [2024-12-09 14:21:46.769037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.138 [2024-12-09 14:21:46.769048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:45.138 [2024-12-09 14:21:46.769058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.138 [2024-12-09 14:21:46.769066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.138 [2024-12-09 14:21:46.769226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.676 ms, result 0 00:28:45.710 00:28:45.710 00:28:45.972 14:21:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:48.521 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:48.521 14:21:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:48.521 [2024-12-09 14:21:49.821361] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:28:48.521 [2024-12-09 14:21:49.821507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82107 ] 00:28:48.521 [2024-12-09 14:21:49.986233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.521 [2024-12-09 14:21:50.111559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.783 [2024-12-09 14:21:50.407520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.783 [2024-12-09 14:21:50.407862] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.783 [2024-12-09 14:21:50.569077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.783 [2024-12-09 14:21:50.569171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.783 [2024-12-09 14:21:50.569188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:48.783 [2024-12-09 14:21:50.569197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.783 [2024-12-09 14:21:50.569256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.783 [2024-12-09 14:21:50.569271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.783 [2024-12-09 14:21:50.569280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:48.783 [2024-12-09 14:21:50.569288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.783 [2024-12-09 14:21:50.569308] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.783 [2024-12-09 14:21:50.570092] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.783 [2024-12-09 14:21:50.570112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.783 [2024-12-09 14:21:50.570122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.783 [2024-12-09 14:21:50.570131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:28:48.783 [2024-12-09 14:21:50.570138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.783 [2024-12-09 14:21:50.571871] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:49.045 [2024-12-09 14:21:50.585871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.045 [2024-12-09 14:21:50.585930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:49.045 [2024-12-09 14:21:50.585944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.002 ms 00:28:49.045 [2024-12-09 14:21:50.585952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.586033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:49.045 [2024-12-09 14:21:50.586043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:49.045 [2024-12-09 14:21:50.586053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:49.045 [2024-12-09 14:21:50.586060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.594025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.045 [2024-12-09 14:21:50.594212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:49.045 [2024-12-09 14:21:50.594231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.889 ms 00:28:49.045 [2024-12-09 14:21:50.594247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.594332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.045 [2024-12-09 14:21:50.594341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:49.045 [2024-12-09 14:21:50.594350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:49.045 [2024-12-09 14:21:50.594357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.594400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.045 [2024-12-09 14:21:50.594410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:49.045 [2024-12-09 14:21:50.594419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:49.045 [2024-12-09 14:21:50.594427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.594453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:49.045 [2024-12-09 14:21:50.598360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.045 [2024-12-09 14:21:50.598397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:49.045 [2024-12-09 14:21:50.598411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.912 ms 00:28:49.045 [2024-12-09 14:21:50.598419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.598457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.045 [2024-12-09 14:21:50.598466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:49.045 [2024-12-09 14:21:50.598474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:49.045 [2024-12-09 14:21:50.598482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.598553] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:49.045 [2024-12-09 14:21:50.598579] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:49.045 [2024-12-09 14:21:50.598618] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:49.045 [2024-12-09 14:21:50.598636] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:49.045 [2024-12-09 14:21:50.598743] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:49.045 [2024-12-09 14:21:50.598756] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:49.045 [2024-12-09 14:21:50.598766] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:49.045 [2024-12-09 14:21:50.598777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:49.045 [2024-12-09 14:21:50.598787] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:49.045 [2024-12-09 14:21:50.598795] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:49.045 [2024-12-09 14:21:50.598803] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:49.045 [2024-12-09 14:21:50.598814] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:49.045 [2024-12-09 14:21:50.598822] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:49.045 [2024-12-09 14:21:50.598830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.045 [2024-12-09 14:21:50.598838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:49.045 [2024-12-09 14:21:50.598846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:28:49.045 [2024-12-09 14:21:50.598853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.598937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.045 [2024-12-09 14:21:50.598946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:49.045 [2024-12-09 14:21:50.598953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:49.045 [2024-12-09 14:21:50.598960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.045 [2024-12-09 14:21:50.599068] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:49.045 [2024-12-09 14:21:50.599083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:49.045 [2024-12-09 14:21:50.599092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:49.045 [2024-12-09 14:21:50.599100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.045 [2024-12-09 14:21:50.599108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:49.045 [2024-12-09 14:21:50.599115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:49.045 [2024-12-09 14:21:50.599122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:49.045 [2024-12-09 14:21:50.599129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:49.045 [2024-12-09 14:21:50.599137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:49.045 [2024-12-09 14:21:50.599144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:49.045 [2024-12-09 14:21:50.599151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:49.045 [2024-12-09 14:21:50.599157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:49.046 [2024-12-09 14:21:50.599164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:49.046 [2024-12-09 14:21:50.599178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:49.046 [2024-12-09 14:21:50.599187] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:28:49.046 [2024-12-09 14:21:50.599194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:49.046 [2024-12-09 14:21:50.599208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:49.046 [2024-12-09 14:21:50.599215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:49.046 [2024-12-09 14:21:50.599229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.046 [2024-12-09 14:21:50.599242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:49.046 [2024-12-09 14:21:50.599249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.046 [2024-12-09 14:21:50.599262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:49.046 [2024-12-09 14:21:50.599269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.046 [2024-12-09 14:21:50.599282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:49.046 [2024-12-09 14:21:50.599288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.046 [2024-12-09 14:21:50.599302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:49.046 [2024-12-09 14:21:50.599309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:49.046 [2024-12-09 14:21:50.599323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:49.046 [2024-12-09 14:21:50.599330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:49.046 [2024-12-09 14:21:50.599336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:49.046 [2024-12-09 14:21:50.599344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:49.046 [2024-12-09 14:21:50.599350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:49.046 [2024-12-09 14:21:50.599356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:49.046 [2024-12-09 14:21:50.599369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:49.046 [2024-12-09 14:21:50.599376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599382] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:49.046 [2024-12-09 14:21:50.599390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:49.046 [2024-12-09 14:21:50.599398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:49.046 [2024-12-09 
14:21:50.599407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.046 [2024-12-09 14:21:50.599416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:49.046 [2024-12-09 14:21:50.599423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:49.046 [2024-12-09 14:21:50.599430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:49.046 [2024-12-09 14:21:50.599436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:49.046 [2024-12-09 14:21:50.599443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:49.046 [2024-12-09 14:21:50.599449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:49.046 [2024-12-09 14:21:50.599458] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:49.046 [2024-12-09 14:21:50.599468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:49.046 [2024-12-09 14:21:50.599479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:49.046 [2024-12-09 14:21:50.599487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:49.046 [2024-12-09 14:21:50.599494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:49.046 [2024-12-09 14:21:50.599501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:49.046 [2024-12-09 14:21:50.599509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:49.046 [2024-12-09 14:21:50.599516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:49.046 [2024-12-09 14:21:50.599523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:49.046 [2024-12-09 14:21:50.599531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:49.046 [2024-12-09 14:21:50.599557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:49.046 [2024-12-09 14:21:50.599565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:49.046 [2024-12-09 14:21:50.599573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:49.046 [2024-12-09 14:21:50.599581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:49.046 [2024-12-09 14:21:50.599590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:49.046 [2024-12-09 14:21:50.599598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:49.046 [2024-12-09 
14:21:50.599606] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:49.046 [2024-12-09 14:21:50.599615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:49.046 [2024-12-09 14:21:50.599624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:49.046 [2024-12-09 14:21:50.599631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:49.046 [2024-12-09 14:21:50.599639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:49.046 [2024-12-09 14:21:50.599646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:49.046 [2024-12-09 14:21:50.599654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.599663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:49.046 [2024-12-09 14:21:50.599671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:28:49.046 [2024-12-09 14:21:50.599678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.631398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.631581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:49.046 [2024-12-09 14:21:50.631979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.670 ms 00:28:49.046 [2024-12-09 14:21:50.632085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.632207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.632263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:49.046 [2024-12-09 14:21:50.632289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:49.046 [2024-12-09 14:21:50.632308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.674621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.674809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:49.046 [2024-12-09 14:21:50.674876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.224 ms 00:28:49.046 [2024-12-09 14:21:50.674903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.674963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.674988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:49.046 [2024-12-09 14:21:50.675015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:49.046 [2024-12-09 14:21:50.675033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.675652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.675787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:49.046 [2024-12-09 14:21:50.675845] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:28:49.046 [2024-12-09 14:21:50.675869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.676464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.676598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:49.046 [2024-12-09 14:21:50.676695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:28:49.046 [2024-12-09 14:21:50.676721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.692692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.692847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:49.046 [2024-12-09 14:21:50.692905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.903 ms 00:28:49.046 [2024-12-09 14:21:50.692929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.707302] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:49.046 [2024-12-09 14:21:50.707472] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:49.046 [2024-12-09 14:21:50.707549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.707572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:49.046 [2024-12-09 14:21:50.707595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.490 ms 00:28:49.046 [2024-12-09 14:21:50.707614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.046 [2024-12-09 14:21:50.738095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.046 [2024-12-09 14:21:50.738246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:49.047 [2024-12-09 14:21:50.738307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.427 ms 00:28:49.047 [2024-12-09 14:21:50.738330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.047 [2024-12-09 14:21:50.751304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.047 [2024-12-09 14:21:50.751452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:49.047 [2024-12-09 14:21:50.751507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.841 ms 00:28:49.047 [2024-12-09 14:21:50.751528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.047 [2024-12-09 14:21:50.764412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.047 [2024-12-09 14:21:50.764609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:49.047 [2024-12-09 14:21:50.764631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.463 ms 00:28:49.047 [2024-12-09 14:21:50.764639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.047 [2024-12-09 14:21:50.765299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.047 [2024-12-09 14:21:50.765326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:49.047 [2024-12-09 14:21:50.765340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.549 ms 00:28:49.047 [2024-12-09 14:21:50.765348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.047 [2024-12-09 14:21:50.829195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.047 [2024-12-09 14:21:50.829260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:49.047 [2024-12-09 14:21:50.829281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.825 ms 00:28:49.047 [2024-12-09 14:21:50.829292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.308 [2024-12-09 14:21:50.840325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:49.308 [2024-12-09 14:21:50.843478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.308 [2024-12-09 14:21:50.843520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:49.308 [2024-12-09 14:21:50.843531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.129 ms 00:28:49.308 [2024-12-09 14:21:50.843553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.308 [2024-12-09 14:21:50.843643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.308 [2024-12-09 14:21:50.843655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:49.308 [2024-12-09 14:21:50.843667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:49.308 [2024-12-09 14:21:50.843676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.308 [2024-12-09 14:21:50.844483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.308 [2024-12-09 14:21:50.844520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:49.308 [2024-12-09 14:21:50.844532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:28:49.308 [2024-12-09 14:21:50.844556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.308 [2024-12-09 14:21:50.844587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.308 [2024-12-09 14:21:50.844597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:49.308 [2024-12-09 14:21:50.844608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:49.308 [2024-12-09 14:21:50.844621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.308 [2024-12-09 14:21:50.844663] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:49.308 [2024-12-09 14:21:50.844675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.308 [2024-12-09 14:21:50.844685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:49.308 [2024-12-09 14:21:50.844695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:49.308 [2024-12-09 14:21:50.844704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.308 [2024-12-09 14:21:50.870817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.308 [2024-12-09 14:21:50.870999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:49.308 [2024-12-09 14:21:50.871029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.092 ms 00:28:49.308 [2024-12-09 14:21:50.871038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
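Note on reading the trace_step records above and below: each management action logs a "name:" record followed by a "duration:" record, so the slow steps can be ranked by pulling those pairs out of a saved copy of this console log. A minimal analysis sketch, not part of the harness; it assumes the log was saved as ftl.log with one record per line, and the file name is illustrative:

    # Rank FTL management steps by duration, slowest first.
    # Patterns match the mngt/ftl_mngt.c trace_step format shown in this log.
    awk '
      /trace_step: .*name: /     { sub(/.*name: /, "");  name = $0 }
      /trace_step: .*duration: / { if (match($0, /duration: [0-9.]+ ms/))
                                     print substr($0, RSTART + 10, RLENGTH - 13), name }
    ' ftl.log | sort -rn | head

On this run it would surface "Restore P2L checkpoints" (63.825 ms) and "Initialize NV cache" (42.224 ms) as the dominant startup costs.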
00:28:49.308 [2024-12-09 14:21:50.871115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.308 [2024-12-09 14:21:50.871125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:49.308 [2024-12-09 14:21:50.871134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:49.308 [2024-12-09 14:21:50.871143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.308 [2024-12-09 14:21:50.872428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.861 ms, result 0 00:28:50.695  [2024-12-09T14:21:53.121Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-09T14:21:54.106Z] Copying: 31/1024 [MB] (10 MBps) [2024-12-09T14:21:55.492Z] Copying: 42052/1048576 [kB] (10208 kBps) [2024-12-09T14:21:56.064Z] Copying: 57/1024 [MB] (16 MBps) [2024-12-09T14:21:57.452Z] Copying: 67/1024 [MB] (10 MBps) [2024-12-09T14:21:58.393Z] Copying: 81/1024 [MB] (14 MBps) [2024-12-09T14:21:59.338Z] Copying: 94/1024 [MB] (12 MBps) [2024-12-09T14:22:00.280Z] Copying: 106/1024 [MB] (11 MBps) [2024-12-09T14:22:01.226Z] Copying: 123/1024 [MB] (16 MBps) [2024-12-09T14:22:02.167Z] Copying: 143/1024 [MB] (19 MBps) [2024-12-09T14:22:03.108Z] Copying: 162/1024 [MB] (18 MBps) [2024-12-09T14:22:04.053Z] Copying: 181/1024 [MB] (19 MBps) [2024-12-09T14:22:05.441Z] Copying: 195776/1048576 [kB] (10224 kBps) [2024-12-09T14:22:06.385Z] Copying: 201/1024 [MB] (10 MBps) [2024-12-09T14:22:07.329Z] Copying: 212/1024 [MB] (10 MBps) [2024-12-09T14:22:08.270Z] Copying: 226/1024 [MB] (14 MBps) [2024-12-09T14:22:09.235Z] Copying: 242/1024 [MB] (16 MBps) [2024-12-09T14:22:10.180Z] Copying: 252/1024 [MB] (10 MBps) [2024-12-09T14:22:11.123Z] Copying: 268908/1048576 [kB] (10140 kBps) [2024-12-09T14:22:12.069Z] Copying: 279036/1048576 [kB] (10128 kBps) [2024-12-09T14:22:13.457Z] Copying: 283/1024 [MB] (10 MBps) [2024-12-09T14:22:14.397Z] Copying: 295/1024 [MB] (11 MBps) [2024-12-09T14:22:15.341Z] Copying: 312/1024 [MB] (16 MBps) [2024-12-09T14:22:16.285Z] Copying: 327/1024 [MB] (15 MBps) [2024-12-09T14:22:17.236Z] Copying: 342/1024 [MB] (14 MBps) [2024-12-09T14:22:18.177Z] Copying: 359/1024 [MB] (17 MBps) [2024-12-09T14:22:19.122Z] Copying: 371/1024 [MB] (12 MBps) [2024-12-09T14:22:20.063Z] Copying: 388/1024 [MB] (16 MBps) [2024-12-09T14:22:21.447Z] Copying: 403/1024 [MB] (15 MBps) [2024-12-09T14:22:22.391Z] Copying: 416/1024 [MB] (13 MBps) [2024-12-09T14:22:23.335Z] Copying: 426/1024 [MB] (10 MBps) [2024-12-09T14:22:24.288Z] Copying: 436/1024 [MB] (10 MBps) [2024-12-09T14:22:25.307Z] Copying: 447/1024 [MB] (10 MBps) [2024-12-09T14:22:26.252Z] Copying: 457/1024 [MB] (10 MBps) [2024-12-09T14:22:27.195Z] Copying: 468/1024 [MB] (10 MBps) [2024-12-09T14:22:28.139Z] Copying: 478/1024 [MB] (10 MBps) [2024-12-09T14:22:29.083Z] Copying: 489/1024 [MB] (10 MBps) [2024-12-09T14:22:30.471Z] Copying: 499/1024 [MB] (10 MBps) [2024-12-09T14:22:31.417Z] Copying: 510/1024 [MB] (10 MBps) [2024-12-09T14:22:32.360Z] Copying: 523/1024 [MB] (13 MBps) [2024-12-09T14:22:33.303Z] Copying: 538/1024 [MB] (15 MBps) [2024-12-09T14:22:34.246Z] Copying: 558/1024 [MB] (19 MBps) [2024-12-09T14:22:35.198Z] Copying: 569/1024 [MB] (11 MBps) [2024-12-09T14:22:36.142Z] Copying: 587/1024 [MB] (17 MBps) [2024-12-09T14:22:37.086Z] Copying: 604/1024 [MB] (17 MBps) [2024-12-09T14:22:38.481Z] Copying: 622/1024 [MB] (18 MBps) [2024-12-09T14:22:39.107Z] Copying: 636/1024 [MB] (14 MBps) [2024-12-09T14:22:40.488Z] Copying: 654/1024 [MB] (17 MBps) 
[2024-12-09T14:22:41.058Z] Copying: 670/1024 [MB] (15 MBps) [2024-12-09T14:22:42.443Z] Copying: 690/1024 [MB] (20 MBps) [2024-12-09T14:22:43.387Z] Copying: 714/1024 [MB] (23 MBps) [2024-12-09T14:22:44.326Z] Copying: 734/1024 [MB] (20 MBps) [2024-12-09T14:22:45.273Z] Copying: 752/1024 [MB] (18 MBps) [2024-12-09T14:22:46.220Z] Copying: 775/1024 [MB] (23 MBps) [2024-12-09T14:22:47.160Z] Copying: 796/1024 [MB] (21 MBps) [2024-12-09T14:22:48.102Z] Copying: 821/1024 [MB] (25 MBps) [2024-12-09T14:22:49.489Z] Copying: 835/1024 [MB] (13 MBps) [2024-12-09T14:22:50.063Z] Copying: 848/1024 [MB] (13 MBps) [2024-12-09T14:22:51.448Z] Copying: 867/1024 [MB] (18 MBps) [2024-12-09T14:22:52.388Z] Copying: 882/1024 [MB] (14 MBps) [2024-12-09T14:22:53.333Z] Copying: 899/1024 [MB] (17 MBps) [2024-12-09T14:22:54.298Z] Copying: 919/1024 [MB] (20 MBps) [2024-12-09T14:22:55.245Z] Copying: 941/1024 [MB] (21 MBps) [2024-12-09T14:22:56.188Z] Copying: 968/1024 [MB] (27 MBps) [2024-12-09T14:22:57.130Z] Copying: 984/1024 [MB] (15 MBps) [2024-12-09T14:22:58.072Z] Copying: 1006/1024 [MB] (22 MBps) [2024-12-09T14:22:58.658Z] Copying: 1018/1024 [MB] (11 MBps) [2024-12-09T14:22:58.658Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-09 14:22:58.633333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.864 [2024-12-09 14:22:58.633423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:56.864 [2024-12-09 14:22:58.633441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:56.864 [2024-12-09 14:22:58.633452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.864 [2024-12-09 14:22:58.633479] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:56.864 [2024-12-09 14:22:58.636898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.864 [2024-12-09 14:22:58.636950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:56.864 [2024-12-09 14:22:58.636963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.400 ms 00:29:56.864 [2024-12-09 14:22:58.636972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.864 [2024-12-09 14:22:58.637260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.864 [2024-12-09 14:22:58.637272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:56.864 [2024-12-09 14:22:58.637283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:29:56.864 [2024-12-09 14:22:58.637292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.864 [2024-12-09 14:22:58.642084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.864 [2024-12-09 14:22:58.642132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:56.864 [2024-12-09 14:22:58.642153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:29:56.864 [2024-12-09 14:22:58.642163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.864 [2024-12-09 14:22:58.648857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.864 [2024-12-09 14:22:58.649082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:56.864 [2024-12-09 14:22:58.649119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:29:56.864 [2024-12-09 14:22:58.649128] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.677150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.677207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:57.126 [2024-12-09 14:22:58.677222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.935 ms 00:29:57.126 [2024-12-09 14:22:58.677231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.693278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.693330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:57.126 [2024-12-09 14:22:58.693346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.991 ms 00:29:57.126 [2024-12-09 14:22:58.693364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.698109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.698160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:57.126 [2024-12-09 14:22:58.698172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.688 ms 00:29:57.126 [2024-12-09 14:22:58.698181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.725319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.725367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:57.126 [2024-12-09 14:22:58.725380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.122 ms 00:29:57.126 [2024-12-09 14:22:58.725388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.751877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.752084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:57.126 [2024-12-09 14:22:58.752108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.437 ms 00:29:57.126 [2024-12-09 14:22:58.752116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.777873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.777933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:57.126 [2024-12-09 14:22:58.777948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.430 ms 00:29:57.126 [2024-12-09 14:22:58.777955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.803313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.803522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:57.126 [2024-12-09 14:22:58.803570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.271 ms 00:29:57.126 [2024-12-09 14:22:58.803578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.803723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:57.126 [2024-12-09 14:22:58.803771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:57.126 [2024-12-09 14:22:58.803782] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:57.126 [2024-12-09 14:22:58.803791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803979] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.803996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 
14:22:58.804171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:29:57.126 [2024-12-09 14:22:58.804364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:57.126 [2024-12-09 14:22:58.804591] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:57.126 
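The band dump above accounts for every valid LBA reported in the stats that follow: band 1 is fully written and closed (261120 / 261120), band 2 is open with 1536 blocks, and all remaining bands are free. A one-line cross-check against the "total valid LBAs" figure below, with both values copied from this log:

    # Band 1 (closed) + Band 2 (open) should equal "total valid LBAs" below.
    echo $(( 261120 + 1536 ))   # prints 262656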
[2024-12-09 14:22:58.804599] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a702aa5-c8aa-46c4-9572-8d77a33e3e76 00:29:57.126 [2024-12-09 14:22:58.804609] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:57.126 [2024-12-09 14:22:58.804617] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:57.126 [2024-12-09 14:22:58.804625] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:57.126 [2024-12-09 14:22:58.804634] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:57.126 [2024-12-09 14:22:58.804649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:57.126 [2024-12-09 14:22:58.804658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:57.126 [2024-12-09 14:22:58.804666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:57.126 [2024-12-09 14:22:58.804674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:57.126 [2024-12-09 14:22:58.804681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:57.126 [2024-12-09 14:22:58.804689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.804698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:57.126 [2024-12-09 14:22:58.804711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:29:57.126 [2024-12-09 14:22:58.804719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.818572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.818750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:57.126 [2024-12-09 14:22:58.818769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.814 ms 00:29:57.126 [2024-12-09 14:22:58.818777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.819176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.126 [2024-12-09 14:22:58.819187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:57.126 [2024-12-09 14:22:58.819197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:29:57.126 [2024-12-09 14:22:58.819205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.856139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.126 [2024-12-09 14:22:58.856191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:57.126 [2024-12-09 14:22:58.856204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.126 [2024-12-09 14:22:58.856214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.856284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.126 [2024-12-09 14:22:58.856295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.126 [2024-12-09 14:22:58.856305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.126 [2024-12-09 14:22:58.856314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.856410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.126 [2024-12-09 
14:22:58.856421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.126 [2024-12-09 14:22:58.856431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.126 [2024-12-09 14:22:58.856440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.126 [2024-12-09 14:22:58.856458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.126 [2024-12-09 14:22:58.856473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.126 [2024-12-09 14:22:58.856481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.126 [2024-12-09 14:22:58.856488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:58.941451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.388 [2024-12-09 14:22:58.941510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.388 [2024-12-09 14:22:58.941524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.388 [2024-12-09 14:22:58.941533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:59.010790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.388 [2024-12-09 14:22:59.010854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.388 [2024-12-09 14:22:59.010867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.388 [2024-12-09 14:22:59.010876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:59.010938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.388 [2024-12-09 14:22:59.010948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.388 [2024-12-09 14:22:59.010958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.388 [2024-12-09 14:22:59.010967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:59.011024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.388 [2024-12-09 14:22:59.011035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.388 [2024-12-09 14:22:59.011049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.388 [2024-12-09 14:22:59.011058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:59.011157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.388 [2024-12-09 14:22:59.011167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.388 [2024-12-09 14:22:59.011176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.388 [2024-12-09 14:22:59.011185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:59.011219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.388 [2024-12-09 14:22:59.011229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:57.388 [2024-12-09 14:22:59.011238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.388 [2024-12-09 14:22:59.011250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:59.011293] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.388 [2024-12-09 14:22:59.011304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.388 [2024-12-09 14:22:59.011313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.388 [2024-12-09 14:22:59.011321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:59.011372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.388 [2024-12-09 14:22:59.011383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.388 [2024-12-09 14:22:59.011395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.388 [2024-12-09 14:22:59.011404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.388 [2024-12-09 14:22:59.011581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.180 ms, result 0 00:29:58.331 00:29:58.331 00:29:58.331 14:22:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:00.245 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:00.245 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:00.245 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:00.245 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:00.245 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:00.507 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:00.507 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:00.507 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:00.507 Process with pid 80175 is not found 00:30:00.507 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80175 00:30:00.507 14:23:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80175 ']' 00:30:00.507 14:23:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80175 00:30:00.507 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80175) - No such process 00:30:00.507 14:23:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80175 is not found' 00:30:00.507 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:00.769 Remove shared memory files 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:00.769 ************************************ 00:30:00.769 END TEST ftl_dirty_shutdown 00:30:00.769 
************************************ 00:30:00.769 00:30:00.769 real 4m16.206s 00:30:00.769 user 4m32.244s 00:30:00.769 sys 0m23.515s 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.769 14:23:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:01.031 14:23:02 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:01.031 14:23:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:01.031 14:23:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.031 14:23:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:01.031 ************************************ 00:30:01.031 START TEST ftl_upgrade_shutdown 00:30:01.031 ************************************ 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:01.031 * Looking for test storage... 00:30:01.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:01.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.031 --rc genhtml_branch_coverage=1 00:30:01.031 --rc genhtml_function_coverage=1 00:30:01.031 --rc genhtml_legend=1 00:30:01.031 --rc geninfo_all_blocks=1 00:30:01.031 --rc geninfo_unexecuted_blocks=1 00:30:01.031 00:30:01.031 ' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:01.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.031 --rc genhtml_branch_coverage=1 00:30:01.031 --rc genhtml_function_coverage=1 00:30:01.031 --rc genhtml_legend=1 00:30:01.031 --rc geninfo_all_blocks=1 00:30:01.031 --rc geninfo_unexecuted_blocks=1 00:30:01.031 00:30:01.031 ' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:01.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.031 --rc genhtml_branch_coverage=1 00:30:01.031 --rc genhtml_function_coverage=1 00:30:01.031 --rc genhtml_legend=1 00:30:01.031 --rc geninfo_all_blocks=1 00:30:01.031 --rc geninfo_unexecuted_blocks=1 00:30:01.031 00:30:01.031 ' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:01.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.031 --rc genhtml_branch_coverage=1 00:30:01.031 --rc genhtml_function_coverage=1 00:30:01.031 --rc genhtml_legend=1 00:30:01.031 --rc geninfo_all_blocks=1 00:30:01.031 --rc geninfo_unexecuted_blocks=1 00:30:01.031 00:30:01.031 ' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:01.031 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:01.032 14:23:02 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82912 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82912 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82912 ']' 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:01.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.032 14:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:01.292 [2024-12-09 14:23:02.849075] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
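For reference, the tcp_target_setup sequence traced above boils down to launching spdk_tgt pinned to core 0 and waiting for its RPC socket to answer. A rough standalone sketch under stated assumptions: the polling loop is an illustration rather than the harness's actual waitforlisten, and /var/tmp/spdk.sock is the default RPC socket path:

    # Launch the SPDK target on core 0, as the harness does above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!

    # Poll the RPC socket until the target is ready to serve requests.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
    done
    echo "spdk_tgt (pid $spdk_tgt_pid) is up"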
00:30:01.292 [2024-12-09 14:23:02.849482] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82912 ] 00:30:01.292 [2024-12-09 14:23:03.014389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.553 [2024-12-09 14:23:03.141614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:02.125 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:02.126 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:02.126 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:02.126 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:02.126 14:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:02.387 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:02.387 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:02.387 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:02.387 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:02.387 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:02.387 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:02.387 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:02.387 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:02.647 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:02.647 { 00:30:02.647 "name": "basen1", 00:30:02.647 "aliases": [ 00:30:02.647 "7b61e2c6-9e26-452e-bdcd-78fd7939fdee" 00:30:02.647 ], 00:30:02.647 "product_name": "NVMe disk", 00:30:02.647 "block_size": 4096, 00:30:02.647 "num_blocks": 1310720, 00:30:02.647 "uuid": "7b61e2c6-9e26-452e-bdcd-78fd7939fdee", 00:30:02.647 "numa_id": -1, 00:30:02.647 "assigned_rate_limits": { 00:30:02.647 "rw_ios_per_sec": 0, 00:30:02.647 "rw_mbytes_per_sec": 0, 00:30:02.647 "r_mbytes_per_sec": 0, 00:30:02.647 "w_mbytes_per_sec": 0 00:30:02.647 }, 00:30:02.647 "claimed": true, 00:30:02.647 "claim_type": "read_many_write_one", 00:30:02.647 "zoned": false, 00:30:02.647 "supported_io_types": { 00:30:02.647 "read": true, 00:30:02.647 "write": true, 00:30:02.647 "unmap": true, 00:30:02.647 "flush": true, 00:30:02.647 "reset": true, 00:30:02.647 "nvme_admin": true, 00:30:02.647 "nvme_io": true, 00:30:02.647 "nvme_io_md": false, 00:30:02.647 "write_zeroes": true, 00:30:02.647 "zcopy": false, 00:30:02.647 "get_zone_info": false, 00:30:02.647 "zone_management": false, 00:30:02.647 "zone_append": false, 00:30:02.647 "compare": true, 00:30:02.647 "compare_and_write": false, 00:30:02.647 "abort": true, 00:30:02.647 "seek_hole": false, 00:30:02.647 "seek_data": false, 00:30:02.647 "copy": true, 00:30:02.647 "nvme_iov_md": false 00:30:02.647 }, 00:30:02.647 "driver_specific": { 00:30:02.647 "nvme": [ 00:30:02.647 { 00:30:02.647 "pci_address": "0000:00:11.0", 00:30:02.647 "trid": { 00:30:02.647 "trtype": "PCIe", 00:30:02.647 "traddr": "0000:00:11.0" 00:30:02.647 }, 00:30:02.647 "ctrlr_data": { 00:30:02.647 "cntlid": 0, 00:30:02.647 "vendor_id": "0x1b36", 00:30:02.647 "model_number": "QEMU NVMe Ctrl", 00:30:02.647 "serial_number": "12341", 00:30:02.647 "firmware_revision": "8.0.0", 00:30:02.647 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:02.647 "oacs": { 00:30:02.647 "security": 0, 00:30:02.647 "format": 1, 00:30:02.647 "firmware": 0, 00:30:02.647 "ns_manage": 1 00:30:02.647 }, 00:30:02.647 "multi_ctrlr": false, 00:30:02.647 "ana_reporting": false 00:30:02.647 }, 00:30:02.647 "vs": { 00:30:02.647 "nvme_version": "1.4" 00:30:02.647 }, 00:30:02.647 "ns_data": { 00:30:02.647 "id": 1, 00:30:02.647 "can_share": false 00:30:02.647 } 00:30:02.647 } 00:30:02.647 ], 00:30:02.647 "mp_policy": "active_passive" 00:30:02.647 } 00:30:02.647 } 00:30:02.647 ]' 00:30:02.647 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:02.647 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:02.647 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:02.647 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:02.648 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:02.648 14:23:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:02.648 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:02.648 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:02.648 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:02.648 14:23:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:02.648 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:02.909 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7da6cf08-9e37-427d-bbbd-ece54f2bee80 00:30:02.909 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:02.909 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7da6cf08-9e37-427d-bbbd-ece54f2bee80 00:30:03.171 14:23:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:03.439 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d982e825-2778-4f37-8912-46a67c6ec904 00:30:03.439 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d982e825-2778-4f37-8912-46a67c6ec904 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=571ecfc7-65d0-4164-b147-07ed97a38d1a 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 571ecfc7-65d0-4164-b147-07ed97a38d1a ]] 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 571ecfc7-65d0-4164-b147-07ed97a38d1a 5120 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=571ecfc7-65d0-4164-b147-07ed97a38d1a 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 571ecfc7-65d0-4164-b147-07ed97a38d1a 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=571ecfc7-65d0-4164-b147-07ed97a38d1a 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 571ecfc7-65d0-4164-b147-07ed97a38d1a 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:03.700 { 00:30:03.700 "name": "571ecfc7-65d0-4164-b147-07ed97a38d1a", 00:30:03.700 "aliases": [ 00:30:03.700 "lvs/basen1p0" 00:30:03.700 ], 00:30:03.700 "product_name": "Logical Volume", 00:30:03.700 "block_size": 4096, 00:30:03.700 "num_blocks": 5242880, 00:30:03.700 "uuid": "571ecfc7-65d0-4164-b147-07ed97a38d1a", 00:30:03.700 "assigned_rate_limits": { 00:30:03.700 "rw_ios_per_sec": 0, 00:30:03.700 "rw_mbytes_per_sec": 0, 00:30:03.700 "r_mbytes_per_sec": 0, 00:30:03.700 "w_mbytes_per_sec": 0 00:30:03.700 }, 00:30:03.700 "claimed": false, 00:30:03.700 "zoned": false, 00:30:03.700 "supported_io_types": { 00:30:03.700 "read": true, 00:30:03.700 "write": true, 00:30:03.700 "unmap": true, 00:30:03.700 "flush": false, 00:30:03.700 "reset": true, 00:30:03.700 "nvme_admin": false, 00:30:03.700 "nvme_io": false, 00:30:03.700 "nvme_io_md": false, 00:30:03.700 "write_zeroes": 
true, 00:30:03.700 "zcopy": false, 00:30:03.700 "get_zone_info": false, 00:30:03.700 "zone_management": false, 00:30:03.700 "zone_append": false, 00:30:03.700 "compare": false, 00:30:03.700 "compare_and_write": false, 00:30:03.700 "abort": false, 00:30:03.700 "seek_hole": true, 00:30:03.700 "seek_data": true, 00:30:03.700 "copy": false, 00:30:03.700 "nvme_iov_md": false 00:30:03.700 }, 00:30:03.700 "driver_specific": { 00:30:03.700 "lvol": { 00:30:03.700 "lvol_store_uuid": "d982e825-2778-4f37-8912-46a67c6ec904", 00:30:03.700 "base_bdev": "basen1", 00:30:03.700 "thin_provision": true, 00:30:03.700 "num_allocated_clusters": 0, 00:30:03.700 "snapshot": false, 00:30:03.700 "clone": false, 00:30:03.700 "esnap_clone": false 00:30:03.700 } 00:30:03.700 } 00:30:03.700 } 00:30:03.700 ]' 00:30:03.700 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:03.961 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:03.961 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:03.961 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:03.961 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:03.961 14:23:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:03.961 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:03.961 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:03.961 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:04.222 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:04.222 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:04.222 14:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:04.481 14:23:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:04.481 14:23:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:04.481 14:23:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 571ecfc7-65d0-4164-b147-07ed97a38d1a -c cachen1p0 --l2p_dram_limit 2 00:30:04.481 [2024-12-09 14:23:06.220989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.481 [2024-12-09 14:23:06.221157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:04.481 [2024-12-09 14:23:06.221177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:04.481 [2024-12-09 14:23:06.221184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.481 [2024-12-09 14:23:06.221242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.481 [2024-12-09 14:23:06.221250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:04.481 [2024-12-09 14:23:06.221258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:04.481 [2024-12-09 14:23:06.221264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.481 [2024-12-09 14:23:06.221280] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:04.481 [2024-12-09 
14:23:06.221852] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:04.481 [2024-12-09 14:23:06.221869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.481 [2024-12-09 14:23:06.221875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:04.481 [2024-12-09 14:23:06.221883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.590 ms 00:30:04.481 [2024-12-09 14:23:06.221889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.481 [2024-12-09 14:23:06.221942] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 173ebc36-92e0-4d65-9c6e-5117fb3df056 00:30:04.481 [2024-12-09 14:23:06.222859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.481 [2024-12-09 14:23:06.222876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:04.481 [2024-12-09 14:23:06.222883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:04.481 [2024-12-09 14:23:06.222890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.481 [2024-12-09 14:23:06.227434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.481 [2024-12-09 14:23:06.227464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:04.481 [2024-12-09 14:23:06.227472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.514 ms 00:30:04.481 [2024-12-09 14:23:06.227478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.481 [2024-12-09 14:23:06.227508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.481 [2024-12-09 14:23:06.227516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:04.481 [2024-12-09 14:23:06.227523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:04.482 [2024-12-09 14:23:06.227531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.482 [2024-12-09 14:23:06.227576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.482 [2024-12-09 14:23:06.227586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:04.482 [2024-12-09 14:23:06.227610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:04.482 [2024-12-09 14:23:06.227617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.482 [2024-12-09 14:23:06.227633] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:04.482 [2024-12-09 14:23:06.230449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.482 [2024-12-09 14:23:06.230474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:04.482 [2024-12-09 14:23:06.230483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.818 ms 00:30:04.482 [2024-12-09 14:23:06.230489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.482 [2024-12-09 14:23:06.230511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.482 [2024-12-09 14:23:06.230517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:04.482 [2024-12-09 14:23:06.230525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:04.482 [2024-12-09 14:23:06.230530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:04.482 [2024-12-09 14:23:06.230560] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:04.482 [2024-12-09 14:23:06.230670] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:04.482 [2024-12-09 14:23:06.230682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:04.482 [2024-12-09 14:23:06.230690] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:04.482 [2024-12-09 14:23:06.230699] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:04.482 [2024-12-09 14:23:06.230705] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:04.482 [2024-12-09 14:23:06.230713] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:04.482 [2024-12-09 14:23:06.230719] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:04.482 [2024-12-09 14:23:06.230728] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:04.482 [2024-12-09 14:23:06.230733] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:04.482 [2024-12-09 14:23:06.230740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.482 [2024-12-09 14:23:06.230746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:04.482 [2024-12-09 14:23:06.230753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:30:04.482 [2024-12-09 14:23:06.230759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.482 [2024-12-09 14:23:06.230824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.482 [2024-12-09 14:23:06.230835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:04.482 [2024-12-09 14:23:06.230842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:04.482 [2024-12-09 14:23:06.230848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.482 [2024-12-09 14:23:06.230926] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:04.482 [2024-12-09 14:23:06.231022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:04.482 [2024-12-09 14:23:06.231035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:04.482 [2024-12-09 14:23:06.231041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:04.482 [2024-12-09 14:23:06.231054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:04.482 [2024-12-09 14:23:06.231066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:04.482 [2024-12-09 14:23:06.231073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:04.482 [2024-12-09 14:23:06.231078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:04.482 [2024-12-09 14:23:06.231091] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:04.482 [2024-12-09 14:23:06.231097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:04.482 [2024-12-09 14:23:06.231109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:04.482 [2024-12-09 14:23:06.231114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:04.482 [2024-12-09 14:23:06.231128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:04.482 [2024-12-09 14:23:06.231134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:04.482 [2024-12-09 14:23:06.231146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:04.482 [2024-12-09 14:23:06.231151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:04.482 [2024-12-09 14:23:06.231157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:04.482 [2024-12-09 14:23:06.231162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:04.482 [2024-12-09 14:23:06.231168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:04.482 [2024-12-09 14:23:06.231173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:04.482 [2024-12-09 14:23:06.231180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:04.482 [2024-12-09 14:23:06.231184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:04.482 [2024-12-09 14:23:06.231191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:04.482 [2024-12-09 14:23:06.231196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:04.482 [2024-12-09 14:23:06.231202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:04.482 [2024-12-09 14:23:06.231207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:04.482 [2024-12-09 14:23:06.231214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:04.482 [2024-12-09 14:23:06.231219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:04.482 [2024-12-09 14:23:06.231230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:04.482 [2024-12-09 14:23:06.231237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:04.482 [2024-12-09 14:23:06.231249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:04.482 [2024-12-09 14:23:06.231265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:04.482 [2024-12-09 14:23:06.231271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231276] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:04.482 [2024-12-09 14:23:06.231283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:04.482 [2024-12-09 14:23:06.231288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:04.482 [2024-12-09 14:23:06.231295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.482 [2024-12-09 14:23:06.231301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:04.482 [2024-12-09 14:23:06.231309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:04.482 [2024-12-09 14:23:06.231314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:04.482 [2024-12-09 14:23:06.231321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:04.482 [2024-12-09 14:23:06.231326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:04.482 [2024-12-09 14:23:06.231332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:04.482 [2024-12-09 14:23:06.231339] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:04.482 [2024-12-09 14:23:06.231349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:04.482 [2024-12-09 14:23:06.231362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:04.482 [2024-12-09 14:23:06.231379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:04.482 [2024-12-09 14:23:06.231386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:04.482 [2024-12-09 14:23:06.231391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:04.482 [2024-12-09 14:23:06.231399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:04.482 [2024-12-09 14:23:06.231441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:04.482 [2024-12-09 14:23:06.231449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:04.482 [2024-12-09 14:23:06.231461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:04.482 [2024-12-09 14:23:06.231467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:04.482 [2024-12-09 14:23:06.231473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:04.482 [2024-12-09 14:23:06.231479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.482 [2024-12-09 14:23:06.231486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:04.482 [2024-12-09 14:23:06.231492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.606 ms 00:30:04.482 [2024-12-09 14:23:06.231499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.482 [2024-12-09 14:23:06.231555] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
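For reference, the FTL bdev stack assembled earlier in this trace can be reproduced with roughly the following RPC sequence; device addresses, sizes, and flags are copied from the traced commands, while the lvstore/lvol UUIDs are generated at run time, so the placeholders below are illustrative:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # base NVMe namespace -> basen1 (5 GiB)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs                            # lvstore on top of basen1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>           # 20 GiB thin-provisioned lvol
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # cache NVMe namespace -> cachen1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1                            # 5 GiB split -> cachen1p0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2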
00:30:04.482 [2024-12-09 14:23:06.231570] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:08.696 [2024-12-09 14:23:10.149960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.150067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:08.696 [2024-12-09 14:23:10.150087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3918.388 ms 00:30:08.696 [2024-12-09 14:23:10.150099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.696 [2024-12-09 14:23:10.181716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.181783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:08.696 [2024-12-09 14:23:10.181799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.367 ms 00:30:08.696 [2024-12-09 14:23:10.181810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.696 [2024-12-09 14:23:10.181901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.181914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:08.696 [2024-12-09 14:23:10.181923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:08.696 [2024-12-09 14:23:10.181940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.696 [2024-12-09 14:23:10.216648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.216816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:08.696 [2024-12-09 14:23:10.216835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.673 ms 00:30:08.696 [2024-12-09 14:23:10.216847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.696 [2024-12-09 14:23:10.216879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.216892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:08.696 [2024-12-09 14:23:10.216900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:08.696 [2024-12-09 14:23:10.216909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.696 [2024-12-09 14:23:10.217358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.217382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:08.696 [2024-12-09 14:23:10.217397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.390 ms 00:30:08.696 [2024-12-09 14:23:10.217406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.696 [2024-12-09 14:23:10.217446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.217457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:08.696 [2024-12-09 14:23:10.217468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:08.696 [2024-12-09 14:23:10.217479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.696 [2024-12-09 14:23:10.232142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.232267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:08.696 [2024-12-09 14:23:10.232282] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.645 ms 00:30:08.696 [2024-12-09 14:23:10.232291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.696 [2024-12-09 14:23:10.259762] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:08.696 [2024-12-09 14:23:10.260708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.696 [2024-12-09 14:23:10.260740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:08.697 [2024-12-09 14:23:10.260755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.346 ms 00:30:08.697 [2024-12-09 14:23:10.260765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.697 [2024-12-09 14:23:10.287292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.697 [2024-12-09 14:23:10.287329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:08.697 [2024-12-09 14:23:10.287343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.487 ms 00:30:08.697 [2024-12-09 14:23:10.287351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.697 [2024-12-09 14:23:10.287429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.697 [2024-12-09 14:23:10.287441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:08.697 [2024-12-09 14:23:10.287455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:30:08.697 [2024-12-09 14:23:10.287462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.697 [2024-12-09 14:23:10.310784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.697 [2024-12-09 14:23:10.310816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:08.697 [2024-12-09 14:23:10.310829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.277 ms 00:30:08.697 [2024-12-09 14:23:10.310837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.697 [2024-12-09 14:23:10.333885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.697 [2024-12-09 14:23:10.334005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:08.697 [2024-12-09 14:23:10.334025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.008 ms 00:30:08.697 [2024-12-09 14:23:10.334033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.697 [2024-12-09 14:23:10.334596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.697 [2024-12-09 14:23:10.334612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:08.697 [2024-12-09 14:23:10.334623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:30:08.697 [2024-12-09 14:23:10.334632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.697 [2024-12-09 14:23:10.408350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.697 [2024-12-09 14:23:10.408378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:08.697 [2024-12-09 14:23:10.408393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.683 ms 00:30:08.697 [2024-12-09 14:23:10.408402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.697 [2024-12-09 14:23:10.433056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:08.697 [2024-12-09 14:23:10.433102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:08.697 [2024-12-09 14:23:10.433116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.584 ms 00:30:08.697 [2024-12-09 14:23:10.433124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.958 [2024-12-09 14:23:10.568318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.958 [2024-12-09 14:23:10.568466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:08.958 [2024-12-09 14:23:10.568486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 135.154 ms 00:30:08.958 [2024-12-09 14:23:10.568493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.958 [2024-12-09 14:23:10.592766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.958 [2024-12-09 14:23:10.592799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:08.958 [2024-12-09 14:23:10.592812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.236 ms 00:30:08.958 [2024-12-09 14:23:10.592819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.958 [2024-12-09 14:23:10.592859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.958 [2024-12-09 14:23:10.592868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:08.958 [2024-12-09 14:23:10.592881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:08.958 [2024-12-09 14:23:10.592888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.958 [2024-12-09 14:23:10.592961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.958 [2024-12-09 14:23:10.592972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:08.958 [2024-12-09 14:23:10.592982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:08.958 [2024-12-09 14:23:10.592989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.958 [2024-12-09 14:23:10.593825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4372.402 ms, result 0 00:30:08.958 { 00:30:08.958 "name": "ftl", 00:30:08.958 "uuid": "173ebc36-92e0-4d65-9c6e-5117fb3df056" 00:30:08.958 } 00:30:08.958 14:23:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:09.218 [2024-12-09 14:23:10.789276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.218 14:23:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:09.218 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:09.478 [2024-12-09 14:23:11.189725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:09.478 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:09.740 [2024-12-09 14:23:11.382026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:09.740 14:23:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:10.001 Fill FTL, iteration 1 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:10.001 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83038 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83038 /var/tmp/spdk.tgt.sock 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83038 ']' 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:10.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.002 14:23:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:10.263 [2024-12-09 14:23:11.799784] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:30:10.263 [2024-12-09 14:23:11.800011] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83038 ] 00:30:10.263 [2024-12-09 14:23:11.957283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.263 [2024-12-09 14:23:12.055098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.200 14:23:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.200 14:23:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:11.200 14:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:11.200 ftln1 00:30:11.200 14:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:11.200 14:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83038 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83038 ']' 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83038 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83038 00:30:11.458 killing process with pid 83038 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83038' 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83038 00:30:11.458 14:23:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83038 00:30:12.832 14:23:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:12.832 14:23:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:12.832 [2024-12-09 14:23:14.625053] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
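The initiator side traced above runs a second spdk_tgt pinned to core 1 with its own RPC socket, attaches the namespace exported by the target over NVMe/TCP, and snapshots the resulting bdev configuration so that spdk_dd can load it directly; a condensed sketch of the traced steps:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # exposes the remote namespace locally as ftln1
  echo '{"subsystems": ['                                                                                    # assembled into ini.json (the redirect itself is not shown in the trace)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
  echo ']}'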
00:30:12.832 [2024-12-09 14:23:14.625183] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83081 ] 00:30:13.091 [2024-12-09 14:23:14.783862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.091 [2024-12-09 14:23:14.877471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.470  [2024-12-09T14:23:17.637Z] Copying: 213/1024 [MB] (213 MBps) [2024-12-09T14:23:18.570Z] Copying: 471/1024 [MB] (258 MBps) [2024-12-09T14:23:19.504Z] Copying: 731/1024 [MB] (260 MBps) [2024-12-09T14:23:19.504Z] Copying: 1001/1024 [MB] (270 MBps) [2024-12-09T14:23:20.069Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:30:18.275 00:30:18.275 Calculate MD5 checksum, iteration 1 00:30:18.275 14:23:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:18.275 14:23:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:18.275 14:23:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:18.275 14:23:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:18.276 14:23:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:18.276 14:23:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:18.276 14:23:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:18.276 14:23:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:18.276 [2024-12-09 14:23:19.965454] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:30:18.276 [2024-12-09 14:23:19.965582] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83138 ] 00:30:18.534 [2024-12-09 14:23:20.124359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.534 [2024-12-09 14:23:20.218232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.910  [2024-12-09T14:23:22.270Z] Copying: 704/1024 [MB] (704 MBps) [2024-12-09T14:23:22.530Z] Copying: 1024/1024 [MB] (average 701 MBps) 00:30:20.736 00:30:20.736 14:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:20.736 14:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b5a62c44439cff340a8c5311cf4d6e4c 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:23.299 Fill FTL, iteration 2 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:23.299 14:23:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:23.299 [2024-12-09 14:23:24.531887] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
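Each fill/verify iteration above follows the same pattern: write 1 GiB of urandom into the FTL bdev at an offset that advances by 1024 one-MiB blocks per iteration, read the same region back into a scratch file, and record its MD5 sum for comparison after the shutdown/upgrade cycle. Condensed from the traced tcp_dd invocations (the real calls also pass --cpumask, --rpc-socket, and --json exactly as shown above):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  $DD --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek                                  # fill
  $DD --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip    # read back
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '                                               # iteration 1 yields b5a62c44439cff340a8c5311cf4d6e4c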
00:30:23.299 [2024-12-09 14:23:24.532514] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83190 ] 00:30:23.299 [2024-12-09 14:23:24.687444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.299 [2024-12-09 14:23:24.764738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.671  [2024-12-09T14:23:27.402Z] Copying: 259/1024 [MB] (259 MBps) [2024-12-09T14:23:28.336Z] Copying: 518/1024 [MB] (259 MBps) [2024-12-09T14:23:29.270Z] Copying: 781/1024 [MB] (263 MBps) [2024-12-09T14:23:29.840Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:30:28.046 00:30:28.046 Calculate MD5 checksum, iteration 2 00:30:28.046 14:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:28.046 14:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:28.046 14:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:28.046 14:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:28.046 14:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:28.046 14:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:28.046 14:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:28.046 14:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:28.046 [2024-12-09 14:23:29.694129] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:30:28.046 [2024-12-09 14:23:29.694245] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83248 ] 00:30:28.306 [2024-12-09 14:23:29.851469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.306 [2024-12-09 14:23:29.945452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.686  [2024-12-09T14:23:32.051Z] Copying: 707/1024 [MB] (707 MBps) [2024-12-09T14:23:32.991Z] Copying: 1024/1024 [MB] (average 706 MBps) 00:30:31.197 00:30:31.197 14:23:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:31.197 14:23:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:33.746 14:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:33.746 14:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=def51fc230c37f884a1e21c56b2b0d14 00:30:33.746 14:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:33.746 14:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:33.746 14:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:33.746 [2024-12-09 14:23:35.140889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.746 [2024-12-09 14:23:35.140930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:33.746 [2024-12-09 14:23:35.140941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:33.746 [2024-12-09 14:23:35.140948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.746 [2024-12-09 14:23:35.140966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.746 [2024-12-09 14:23:35.140975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:33.746 [2024-12-09 14:23:35.140982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:33.746 [2024-12-09 14:23:35.140987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.746 [2024-12-09 14:23:35.141003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.746 [2024-12-09 14:23:35.141009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:33.746 [2024-12-09 14:23:35.141016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:33.746 [2024-12-09 14:23:35.141021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.746 [2024-12-09 14:23:35.141070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.171 ms, result 0 00:30:33.746 true 00:30:33.746 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:33.746 { 00:30:33.746 "name": "ftl", 00:30:33.746 "properties": [ 00:30:33.746 { 00:30:33.746 "name": "superblock_version", 00:30:33.746 "value": 5, 00:30:33.746 "read-only": true 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "name": "base_device", 00:30:33.746 "bands": [ 00:30:33.746 { 00:30:33.746 "id": 0, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 
00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 1, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 2, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 3, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 4, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 5, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 6, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 7, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 8, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 9, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 10, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 11, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 12, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 13, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 14, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 15, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 16, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 17, 00:30:33.746 "state": "FREE", 00:30:33.746 "validity": 0.0 00:30:33.746 } 00:30:33.746 ], 00:30:33.746 "read-only": true 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "name": "cache_device", 00:30:33.746 "type": "bdev", 00:30:33.746 "chunks": [ 00:30:33.746 { 00:30:33.746 "id": 0, 00:30:33.746 "state": "INACTIVE", 00:30:33.746 "utilization": 0.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 1, 00:30:33.746 "state": "CLOSED", 00:30:33.746 "utilization": 1.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 2, 00:30:33.746 "state": "CLOSED", 00:30:33.746 "utilization": 1.0 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 3, 00:30:33.746 "state": "OPEN", 00:30:33.746 "utilization": 0.001953125 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "id": 4, 00:30:33.746 "state": "OPEN", 00:30:33.746 "utilization": 0.0 00:30:33.746 } 00:30:33.746 ], 00:30:33.746 "read-only": true 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "name": "verbose_mode", 00:30:33.746 "value": true, 00:30:33.746 "unit": "", 00:30:33.746 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:33.746 }, 00:30:33.746 { 00:30:33.746 "name": "prep_upgrade_on_shutdown", 00:30:33.746 "value": false, 00:30:33.746 "unit": "", 00:30:33.746 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:33.746 } 00:30:33.746 ] 00:30:33.746 } 00:30:33.746 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:33.746 [2024-12-09 14:23:35.493204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
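The prep_upgrade_on_shutdown toggle issued just above is traced in the records that follow; its effect can be read back the same way the test's own jq filter inspects chunk utilization a few steps later. A minimal sketch of such a check, not part of the captured run, assuming jq is installed and the target listens on the default RPC socket:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq -r '.properties[] | select(.name == "prep_upgrade_on_shutdown") | .value'
    # prints "false" before the bdev_ftl_set_property call above and "true" once it completes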
00:30:33.746 [2024-12-09 14:23:35.493240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:33.746 [2024-12-09 14:23:35.493251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:33.746 [2024-12-09 14:23:35.493257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.746 [2024-12-09 14:23:35.493275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.746 [2024-12-09 14:23:35.493281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:33.746 [2024-12-09 14:23:35.493288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:33.746 [2024-12-09 14:23:35.493293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.746 [2024-12-09 14:23:35.493307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.746 [2024-12-09 14:23:35.493313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:33.746 [2024-12-09 14:23:35.493319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:33.746 [2024-12-09 14:23:35.493324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.746 [2024-12-09 14:23:35.493370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.155 ms, result 0 00:30:33.746 true 00:30:33.746 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:33.746 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:33.746 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:34.005 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:34.005 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:34.005 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:34.263 [2024-12-09 14:23:35.873525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.263 [2024-12-09 14:23:35.873659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:34.263 [2024-12-09 14:23:35.873704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:34.263 [2024-12-09 14:23:35.873722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.263 [2024-12-09 14:23:35.873753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.263 [2024-12-09 14:23:35.873769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:34.263 [2024-12-09 14:23:35.873784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:34.263 [2024-12-09 14:23:35.873798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.263 [2024-12-09 14:23:35.873822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.263 [2024-12-09 14:23:35.873838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:34.263 [2024-12-09 14:23:35.873853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:34.263 [2024-12-09 14:23:35.873895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:34.263 [2024-12-09 14:23:35.873953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.414 ms, result 0 00:30:34.263 true 00:30:34.263 14:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:34.263 { 00:30:34.263 "name": "ftl", 00:30:34.263 "properties": [ 00:30:34.263 { 00:30:34.263 "name": "superblock_version", 00:30:34.263 "value": 5, 00:30:34.263 "read-only": true 00:30:34.263 }, 00:30:34.263 { 00:30:34.263 "name": "base_device", 00:30:34.263 "bands": [ 00:30:34.263 { 00:30:34.263 "id": 0, 00:30:34.263 "state": "FREE", 00:30:34.263 "validity": 0.0 00:30:34.263 }, 00:30:34.263 { 00:30:34.263 "id": 1, 00:30:34.263 "state": "FREE", 00:30:34.263 "validity": 0.0 00:30:34.263 }, 00:30:34.263 { 00:30:34.263 "id": 2, 00:30:34.263 "state": "FREE", 00:30:34.263 "validity": 0.0 00:30:34.263 }, 00:30:34.263 { 00:30:34.263 "id": 3, 00:30:34.263 "state": "FREE", 00:30:34.263 "validity": 0.0 00:30:34.263 }, 00:30:34.263 { 00:30:34.263 "id": 4, 00:30:34.263 "state": "FREE", 00:30:34.263 "validity": 0.0 00:30:34.263 }, 00:30:34.263 { 00:30:34.263 "id": 5, 00:30:34.263 "state": "FREE", 00:30:34.263 "validity": 0.0 00:30:34.263 }, 00:30:34.263 { 00:30:34.263 "id": 6, 00:30:34.263 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 7, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 8, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 9, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 10, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 11, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 12, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 13, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 14, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 15, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 16, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 17, 00:30:34.264 "state": "FREE", 00:30:34.264 "validity": 0.0 00:30:34.264 } 00:30:34.264 ], 00:30:34.264 "read-only": true 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "name": "cache_device", 00:30:34.264 "type": "bdev", 00:30:34.264 "chunks": [ 00:30:34.264 { 00:30:34.264 "id": 0, 00:30:34.264 "state": "INACTIVE", 00:30:34.264 "utilization": 0.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 1, 00:30:34.264 "state": "CLOSED", 00:30:34.264 "utilization": 1.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 2, 00:30:34.264 "state": "CLOSED", 00:30:34.264 "utilization": 1.0 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 3, 00:30:34.264 "state": "OPEN", 00:30:34.264 "utilization": 0.001953125 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "id": 4, 00:30:34.264 "state": "OPEN", 00:30:34.264 "utilization": 0.0 00:30:34.264 } 00:30:34.264 ], 00:30:34.264 "read-only": true 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "name": "verbose_mode", 
00:30:34.264 "value": true, 00:30:34.264 "unit": "", 00:30:34.264 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:34.264 }, 00:30:34.264 { 00:30:34.264 "name": "prep_upgrade_on_shutdown", 00:30:34.264 "value": true, 00:30:34.264 "unit": "", 00:30:34.264 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:34.264 } 00:30:34.264 ] 00:30:34.264 } 00:30:34.264 14:23:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:34.264 14:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82912 ]] 00:30:34.264 14:23:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82912 00:30:34.264 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82912 ']' 00:30:34.264 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82912 00:30:34.264 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:34.522 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.522 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82912 00:30:34.522 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:34.522 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:34.522 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82912' 00:30:34.522 killing process with pid 82912 00:30:34.522 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82912 00:30:34.522 14:23:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82912 00:30:35.091 [2024-12-09 14:23:36.606065] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:35.091 [2024-12-09 14:23:36.616834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.091 [2024-12-09 14:23:36.616864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:35.091 [2024-12-09 14:23:36.616874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:35.091 [2024-12-09 14:23:36.616881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.091 [2024-12-09 14:23:36.616899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:35.091 [2024-12-09 14:23:36.618944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.091 [2024-12-09 14:23:36.618961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:35.091 [2024-12-09 14:23:36.618969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.035 ms 00:30:35.091 [2024-12-09 14:23:36.618975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.765214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.765273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:45.111 [2024-12-09 14:23:45.765287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9146.183 ms 00:30:45.111 [2024-12-09 14:23:45.765300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.767082] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.767110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:45.111 [2024-12-09 14:23:45.767121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.767 ms 00:30:45.111 [2024-12-09 14:23:45.767130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.768248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.768268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:45.111 [2024-12-09 14:23:45.768278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.091 ms 00:30:45.111 [2024-12-09 14:23:45.768287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.778678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.778708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:45.111 [2024-12-09 14:23:45.778719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.355 ms 00:30:45.111 [2024-12-09 14:23:45.778728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.785914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.785944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:45.111 [2024-12-09 14:23:45.785955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.155 ms 00:30:45.111 [2024-12-09 14:23:45.785962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.786049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.786060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:45.111 [2024-12-09 14:23:45.786072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:30:45.111 [2024-12-09 14:23:45.786080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.795969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.795996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:45.111 [2024-12-09 14:23:45.796005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.874 ms 00:30:45.111 [2024-12-09 14:23:45.796013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.805952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.805978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:45.111 [2024-12-09 14:23:45.805987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.911 ms 00:30:45.111 [2024-12-09 14:23:45.805995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.815421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.815448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:45.111 [2024-12-09 14:23:45.815458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.396 ms 00:30:45.111 [2024-12-09 14:23:45.815465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.825008] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.825136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:45.111 [2024-12-09 14:23:45.825151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.486 ms 00:30:45.111 [2024-12-09 14:23:45.825158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.825184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:45.111 [2024-12-09 14:23:45.825206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:45.111 [2024-12-09 14:23:45.825215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:45.111 [2024-12-09 14:23:45.825223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:45.111 [2024-12-09 14:23:45.825231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:45.111 [2024-12-09 14:23:45.825342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:45.111 [2024-12-09 14:23:45.825350] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 173ebc36-92e0-4d65-9c6e-5117fb3df056 00:30:45.111 [2024-12-09 14:23:45.825357] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:45.111 [2024-12-09 14:23:45.825364] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:45.111 [2024-12-09 14:23:45.825371] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:45.111 [2024-12-09 14:23:45.825378] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:45.111 [2024-12-09 14:23:45.825385] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:45.111 [2024-12-09 14:23:45.825395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:45.111 [2024-12-09 14:23:45.825402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:45.111 [2024-12-09 14:23:45.825408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:45.111 [2024-12-09 14:23:45.825416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:45.111 [2024-12-09 14:23:45.825424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.825434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:45.111 [2024-12-09 14:23:45.825442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.240 ms 00:30:45.111 [2024-12-09 14:23:45.825450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.837812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.837840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:45.111 [2024-12-09 14:23:45.837851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.347 ms 00:30:45.111 [2024-12-09 14:23:45.837863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.838196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.111 [2024-12-09 14:23:45.838205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:45.111 [2024-12-09 14:23:45.838212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.316 ms 00:30:45.111 [2024-12-09 14:23:45.838219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.879383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.111 [2024-12-09 14:23:45.879411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:45.111 [2024-12-09 14:23:45.879426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.111 [2024-12-09 14:23:45.879435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.879462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.111 [2024-12-09 14:23:45.879469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:45.111 [2024-12-09 14:23:45.879477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.111 [2024-12-09 14:23:45.879484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.111 [2024-12-09 14:23:45.879561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:45.879571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:45.112 [2024-12-09 14:23:45.879579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:45.879590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:45.879606] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:45.879613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:45.112 [2024-12-09 14:23:45.879621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:45.879629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:45.954981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:45.955145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:45.112 [2024-12-09 14:23:45.955163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:45.955176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:46.016905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:46.016941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:45.112 [2024-12-09 14:23:46.016952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:46.016960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:46.017042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:46.017052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:45.112 [2024-12-09 14:23:46.017060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:46.017067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:46.017118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:46.017127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:45.112 [2024-12-09 14:23:46.017136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:46.017143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:46.017232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:46.017241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:45.112 [2024-12-09 14:23:46.017248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:46.017255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:46.017284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:46.017296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:45.112 [2024-12-09 14:23:46.017303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:46.017310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:46.017345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:46.017354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:45.112 [2024-12-09 14:23:46.017361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:46.017369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 
[2024-12-09 14:23:46.017413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:45.112 [2024-12-09 14:23:46.017422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:45.112 [2024-12-09 14:23:46.017430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:45.112 [2024-12-09 14:23:46.017437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.112 [2024-12-09 14:23:46.017563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9400.656 ms, result 0 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83444 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83444 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83444 ']' 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.400 14:23:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:50.400 [2024-12-09 14:23:51.504726] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
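The waitforlisten 83444 call above blocks until the restarted spdk_tgt answers on its RPC socket. A minimal sketch of that polling idea, an illustration rather than the harness's autotest_common.sh implementation, assuming the default /var/tmp/spdk.sock path and rpc.py at the repo location shown in the log:

    pid=83444
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || break                                   # stop waiting if the target died
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break # socket is up and answering RPC
        sleep 0.5
    done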
00:30:50.400 [2024-12-09 14:23:51.504843] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83444 ] 00:30:50.400 [2024-12-09 14:23:51.661617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.400 [2024-12-09 14:23:51.761029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.661 [2024-12-09 14:23:52.452266] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:50.661 [2024-12-09 14:23:52.452333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:50.924 [2024-12-09 14:23:52.600449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.600490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:50.924 [2024-12-09 14:23:52.600503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:50.924 [2024-12-09 14:23:52.600510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.600578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.600589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:50.924 [2024-12-09 14:23:52.600597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:30:50.924 [2024-12-09 14:23:52.600605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.600630] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:50.924 [2024-12-09 14:23:52.601362] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:50.924 [2024-12-09 14:23:52.601383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.601390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:50.924 [2024-12-09 14:23:52.601398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.764 ms 00:30:50.924 [2024-12-09 14:23:52.601405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.602433] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:50.924 [2024-12-09 14:23:52.614891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.614924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:50.924 [2024-12-09 14:23:52.614939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.460 ms 00:30:50.924 [2024-12-09 14:23:52.614946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.614997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.615006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:50.924 [2024-12-09 14:23:52.615014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:50.924 [2024-12-09 14:23:52.615021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.619726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 
14:23:52.619752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:50.924 [2024-12-09 14:23:52.619761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.653 ms 00:30:50.924 [2024-12-09 14:23:52.619768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.619869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.619877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:50.924 [2024-12-09 14:23:52.619885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:30:50.924 [2024-12-09 14:23:52.619892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.619945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.619957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:50.924 [2024-12-09 14:23:52.619965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:50.924 [2024-12-09 14:23:52.619973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.619995] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:50.924 [2024-12-09 14:23:52.623349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.623373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:50.924 [2024-12-09 14:23:52.623382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.358 ms 00:30:50.924 [2024-12-09 14:23:52.623392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.623416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.623425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:50.924 [2024-12-09 14:23:52.623432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:50.924 [2024-12-09 14:23:52.623439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.623459] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:50.924 [2024-12-09 14:23:52.623479] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:50.924 [2024-12-09 14:23:52.623512] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:50.924 [2024-12-09 14:23:52.623526] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:50.924 [2024-12-09 14:23:52.623644] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:50.924 [2024-12-09 14:23:52.623655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:50.924 [2024-12-09 14:23:52.623665] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:50.924 [2024-12-09 14:23:52.623692] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:50.924 [2024-12-09 14:23:52.623701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:50.924 [2024-12-09 14:23:52.623711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:50.924 [2024-12-09 14:23:52.623718] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:50.924 [2024-12-09 14:23:52.623725] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:50.924 [2024-12-09 14:23:52.623732] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:50.924 [2024-12-09 14:23:52.623739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.623746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:50.924 [2024-12-09 14:23:52.623754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:30:50.924 [2024-12-09 14:23:52.623760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.623844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.924 [2024-12-09 14:23:52.623852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:50.924 [2024-12-09 14:23:52.623861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:50.924 [2024-12-09 14:23:52.623868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.924 [2024-12-09 14:23:52.623978] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:50.924 [2024-12-09 14:23:52.623988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:50.924 [2024-12-09 14:23:52.623996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:50.924 [2024-12-09 14:23:52.624003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.924 [2024-12-09 14:23:52.624010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:50.924 [2024-12-09 14:23:52.624016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:50.924 [2024-12-09 14:23:52.624023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:50.924 [2024-12-09 14:23:52.624030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:50.924 [2024-12-09 14:23:52.624037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:50.925 [2024-12-09 14:23:52.624043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:50.925 [2024-12-09 14:23:52.624055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:50.925 [2024-12-09 14:23:52.624062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:50.925 [2024-12-09 14:23:52.624076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:50.925 [2024-12-09 14:23:52.624082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:50.925 [2024-12-09 14:23:52.624096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:50.925 [2024-12-09 14:23:52.624102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624109] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:50.925 [2024-12-09 14:23:52.624115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:50.925 [2024-12-09 14:23:52.624122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:50.925 [2024-12-09 14:23:52.624128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:50.925 [2024-12-09 14:23:52.624140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:50.925 [2024-12-09 14:23:52.624147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:50.925 [2024-12-09 14:23:52.624153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:50.925 [2024-12-09 14:23:52.624160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:50.925 [2024-12-09 14:23:52.624166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:50.925 [2024-12-09 14:23:52.624173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:50.925 [2024-12-09 14:23:52.624179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:50.925 [2024-12-09 14:23:52.624185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:50.925 [2024-12-09 14:23:52.624191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:50.925 [2024-12-09 14:23:52.624198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:50.925 [2024-12-09 14:23:52.624204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:50.925 [2024-12-09 14:23:52.624217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:50.925 [2024-12-09 14:23:52.624223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:50.925 [2024-12-09 14:23:52.624235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:50.925 [2024-12-09 14:23:52.624254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:50.925 [2024-12-09 14:23:52.624260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624266] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:50.925 [2024-12-09 14:23:52.624275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:50.925 [2024-12-09 14:23:52.624281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:50.925 [2024-12-09 14:23:52.624290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:50.925 [2024-12-09 14:23:52.624299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:50.925 [2024-12-09 14:23:52.624306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:50.925 [2024-12-09 14:23:52.624312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:50.925 [2024-12-09 14:23:52.624319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:50.925 [2024-12-09 14:23:52.624325] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:50.925 [2024-12-09 14:23:52.624332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:50.925 [2024-12-09 14:23:52.624340] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:50.925 [2024-12-09 14:23:52.624349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:50.925 [2024-12-09 14:23:52.624364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:50.925 [2024-12-09 14:23:52.624385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:50.925 [2024-12-09 14:23:52.624393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:50.925 [2024-12-09 14:23:52.624400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:50.925 [2024-12-09 14:23:52.624406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:50.925 [2024-12-09 14:23:52.624456] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:50.925 [2024-12-09 14:23:52.624464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:50.925 [2024-12-09 14:23:52.624479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:50.925 [2024-12-09 14:23:52.624486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:50.925 [2024-12-09 14:23:52.624493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:50.925 [2024-12-09 14:23:52.624500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.925 [2024-12-09 14:23:52.624507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:50.925 [2024-12-09 14:23:52.624514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.591 ms 00:30:50.925 [2024-12-09 14:23:52.624521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.925 [2024-12-09 14:23:52.624569] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:50.925 [2024-12-09 14:23:52.624579] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:54.218 [2024-12-09 14:23:56.006308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.218 [2024-12-09 14:23:56.006366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:54.218 [2024-12-09 14:23:56.006381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3381.725 ms 00:30:54.218 [2024-12-09 14:23:56.006390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.480 [2024-12-09 14:23:56.031583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.480 [2024-12-09 14:23:56.031621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:54.480 [2024-12-09 14:23:56.031633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.975 ms 00:30:54.480 [2024-12-09 14:23:56.031641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.480 [2024-12-09 14:23:56.031716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.480 [2024-12-09 14:23:56.031731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:54.480 [2024-12-09 14:23:56.031740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:54.480 [2024-12-09 14:23:56.031747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.480 [2024-12-09 14:23:56.061976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.480 [2024-12-09 14:23:56.062125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:54.480 [2024-12-09 14:23:56.062149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.192 ms 00:30:54.480 [2024-12-09 14:23:56.062156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.480 [2024-12-09 14:23:56.062186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.480 [2024-12-09 14:23:56.062193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:54.480 [2024-12-09 14:23:56.062201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:54.480 [2024-12-09 14:23:56.062209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.480 [2024-12-09 14:23:56.062580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.480 [2024-12-09 14:23:56.062596] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:54.480 [2024-12-09 14:23:56.062605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:30:54.480 [2024-12-09 14:23:56.062612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.480 [2024-12-09 14:23:56.062653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.480 [2024-12-09 14:23:56.062661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:54.480 [2024-12-09 14:23:56.062669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:54.480 [2024-12-09 14:23:56.062677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.480 [2024-12-09 14:23:56.076733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.480 [2024-12-09 14:23:56.076847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:54.480 [2024-12-09 14:23:56.076862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.034 ms 00:30:54.480 [2024-12-09 14:23:56.076871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.101759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:54.481 [2024-12-09 14:23:56.101805] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:54.481 [2024-12-09 14:23:56.101822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.101832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:54.481 [2024-12-09 14:23:56.101844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.855 ms 00:30:54.481 [2024-12-09 14:23:56.101853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.116371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.116405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:54.481 [2024-12-09 14:23:56.116415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.471 ms 00:30:54.481 [2024-12-09 14:23:56.116423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.128055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.128086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:54.481 [2024-12-09 14:23:56.128096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.595 ms 00:30:54.481 [2024-12-09 14:23:56.128102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.139658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.139687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:54.481 [2024-12-09 14:23:56.139697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.523 ms 00:30:54.481 [2024-12-09 14:23:56.139703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.140303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.140321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:54.481 [2024-12-09 
14:23:56.140330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:30:54.481 [2024-12-09 14:23:56.140337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.195054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.195237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:54.481 [2024-12-09 14:23:56.195256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.699 ms 00:30:54.481 [2024-12-09 14:23:56.195264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.206098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:54.481 [2024-12-09 14:23:56.206886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.206916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:54.481 [2024-12-09 14:23:56.206927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.374 ms 00:30:54.481 [2024-12-09 14:23:56.206935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.207012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.207026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:54.481 [2024-12-09 14:23:56.207035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:54.481 [2024-12-09 14:23:56.207042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.207107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.207118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:54.481 [2024-12-09 14:23:56.207126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:54.481 [2024-12-09 14:23:56.207134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.207154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.207162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:54.481 [2024-12-09 14:23:56.207173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:54.481 [2024-12-09 14:23:56.207180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.207210] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:54.481 [2024-12-09 14:23:56.207220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.207228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:54.481 [2024-12-09 14:23:56.207235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:54.481 [2024-12-09 14:23:56.207242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.229956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.229993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:54.481 [2024-12-09 14:23:56.230003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.694 ms 00:30:54.481 [2024-12-09 14:23:56.230011] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.230082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.481 [2024-12-09 14:23:56.230091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:54.481 [2024-12-09 14:23:56.230099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:54.481 [2024-12-09 14:23:56.230106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.481 [2024-12-09 14:23:56.231120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3630.257 ms, result 0 00:30:54.481 [2024-12-09 14:23:56.246294] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.481 [2024-12-09 14:23:56.262290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:54.481 [2024-12-09 14:23:56.270407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:54.742 14:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.742 14:23:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:54.742 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:54.742 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:54.742 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:54.742 [2024-12-09 14:23:56.494446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.742 [2024-12-09 14:23:56.494484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:54.742 [2024-12-09 14:23:56.494499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:54.742 [2024-12-09 14:23:56.494507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.742 [2024-12-09 14:23:56.494529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.742 [2024-12-09 14:23:56.494553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:54.742 [2024-12-09 14:23:56.494562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:54.742 [2024-12-09 14:23:56.494569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.742 [2024-12-09 14:23:56.494588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.742 [2024-12-09 14:23:56.494596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:54.742 [2024-12-09 14:23:56.494604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:54.742 [2024-12-09 14:23:56.494611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.742 [2024-12-09 14:23:56.494668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.210 ms, result 0 00:30:54.742 true 00:30:54.742 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:55.020 { 00:30:55.020 "name": "ftl", 00:30:55.020 "properties": [ 00:30:55.020 { 00:30:55.020 "name": "superblock_version", 00:30:55.020 "value": 5, 00:30:55.020 "read-only": true 00:30:55.020 }, 
00:30:55.020 { 00:30:55.020 "name": "base_device", 00:30:55.020 "bands": [ 00:30:55.020 { 00:30:55.020 "id": 0, 00:30:55.020 "state": "CLOSED", 00:30:55.020 "validity": 1.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 1, 00:30:55.020 "state": "CLOSED", 00:30:55.020 "validity": 1.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 2, 00:30:55.020 "state": "CLOSED", 00:30:55.020 "validity": 0.007843137254901933 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 3, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 4, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 5, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 6, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 7, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 8, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 9, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 10, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 11, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 12, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 13, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 14, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 15, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 16, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 17, 00:30:55.020 "state": "FREE", 00:30:55.020 "validity": 0.0 00:30:55.020 } 00:30:55.020 ], 00:30:55.020 "read-only": true 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "name": "cache_device", 00:30:55.020 "type": "bdev", 00:30:55.020 "chunks": [ 00:30:55.020 { 00:30:55.020 "id": 0, 00:30:55.020 "state": "INACTIVE", 00:30:55.020 "utilization": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 1, 00:30:55.020 "state": "OPEN", 00:30:55.020 "utilization": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 2, 00:30:55.020 "state": "OPEN", 00:30:55.020 "utilization": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 3, 00:30:55.020 "state": "FREE", 00:30:55.020 "utilization": 0.0 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "id": 4, 00:30:55.020 "state": "FREE", 00:30:55.020 "utilization": 0.0 00:30:55.020 } 00:30:55.020 ], 00:30:55.020 "read-only": true 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "name": "verbose_mode", 00:30:55.020 "value": true, 00:30:55.020 "unit": "", 00:30:55.020 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:55.020 }, 00:30:55.020 { 00:30:55.020 "name": "prep_upgrade_on_shutdown", 00:30:55.020 "value": false, 00:30:55.020 "unit": "", 00:30:55.020 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:55.020 } 00:30:55.020 ] 00:30:55.020 } 00:30:55.020 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:55.020 14:23:56 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:55.020 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:55.298 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:55.298 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:55.298 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:55.298 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:55.298 14:23:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:55.558 Validate MD5 checksum, iteration 1 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:55.558 14:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:55.558 [2024-12-09 14:23:57.201730] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
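Annotation (not part of the captured output): the two jq filters traced above reduce the bdev_ftl_get_properties JSON to plain counts — cache_device chunks with non-zero utilization ("used") and bands left OPENED ("opened") — and the harness branches on those counts before checksum validation begins. A minimal stand-alone sketch of the used-chunks check; the jq filter is verbatim from the trace above, while the wrapper variable and failure handling are assumed for illustration:

  props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
  # count cache chunks that still hold data; 0 means the write buffer was fully drained
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  (( used == 0 )) || { echo "cache not drained: $used chunk(s) still in use"; exit 1; }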
00:30:55.558 [2024-12-09 14:23:57.201982] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83524 ] 00:30:55.819 [2024-12-09 14:23:57.361728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.819 [2024-12-09 14:23:57.457072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.207  [2024-12-09T14:23:59.948Z] Copying: 575/1024 [MB] (575 MBps) [2024-12-09T14:24:01.332Z] Copying: 1024/1024 [MB] (average 569 MBps) 00:30:59.538 00:30:59.538 14:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:59.538 14:24:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b5a62c44439cff340a8c5311cf4d6e4c 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b5a62c44439cff340a8c5311cf4d6e4c != \b\5\a\6\2\c\4\4\4\3\9\c\f\f\3\4\0\a\8\c\5\3\1\1\c\f\4\d\6\e\4\c ]] 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:02.076 Validate MD5 checksum, iteration 2 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:02.076 14:24:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:02.076 [2024-12-09 14:24:03.393708] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
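Annotation (not part of the captured output): each validation iteration above copies 1 GiB out of the ftl bdev over NVMe/TCP with spdk_dd (via the tcp_dd helper from ftl/common.sh), hashes the resulting file, and compares it to the MD5 recorded for that offset before shutdown; the backslash-escaped pattern in the [[ ... != ... ]] trace is only bash xtrace quoting the literal sum. A condensed sketch of the loop driving these traces, with the expected-sum array name being an assumed placeholder (iterations is set by the harness):

  skip=0
  for (( i = 0; i < iterations; i++ )); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # read 1024 MiB from the FTL bdev at the current offset through the TCP initiator
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    # data written before the shutdown must read back bit-identical
    [[ $sum == "${expected_sums[i]}" ]] || return 1
  done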
00:31:02.076 [2024-12-09 14:24:03.393818] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83593 ] 00:31:02.076 [2024-12-09 14:24:03.553830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.076 [2024-12-09 14:24:03.647175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.466  [2024-12-09T14:24:06.199Z] Copying: 501/1024 [MB] (501 MBps) [2024-12-09T14:24:07.134Z] Copying: 1024/1024 [MB] (average 517 MBps) 00:31:05.340 00:31:05.340 14:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:05.340 14:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=def51fc230c37f884a1e21c56b2b0d14 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ def51fc230c37f884a1e21c56b2b0d14 != \d\e\f\5\1\f\c\2\3\0\c\3\7\f\8\8\4\a\1\e\2\1\c\5\6\b\2\b\0\d\1\4 ]] 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83444 ]] 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83444 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83660 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83660 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83660 ']' 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
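Annotation (not part of the captured output): with both pre-shutdown checksums recorded, the harness force-kills the target (kill -9, so FTL never shuts down cleanly) and relaunches it from the saved tgt.json. Because "Set FTL dirty state" was applied earlier, the new instance must perform dirty-state recovery — replaying P2L checkpoints and re-closing open NV-cache chunks — which is what the long startup trace below shows. The shutdown/restart step, using the variable names visible in the "Killed" message that follows, reduces roughly to:

  # tcp_target_shutdown_dirty: SIGKILL leaves the FTL superblock marked dirty
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid
  # relaunch from the JSON config captured earlier; FTL startup must self-recover
  "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"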
00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:07.878 14:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:07.878 [2024-12-09 14:24:09.145712] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:31:07.878 [2024-12-09 14:24:09.145997] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83660 ] 00:31:07.878 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83444 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:07.878 [2024-12-09 14:24:09.300992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.878 [2024-12-09 14:24:09.374988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.445 [2024-12-09 14:24:09.945931] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:08.445 [2024-12-09 14:24:09.945979] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:08.445 [2024-12-09 14:24:10.088838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.088870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:08.445 [2024-12-09 14:24:10.088879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:08.445 [2024-12-09 14:24:10.088886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.088926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.088934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:08.445 [2024-12-09 14:24:10.088940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:31:08.445 [2024-12-09 14:24:10.088946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.088963] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:08.445 [2024-12-09 14:24:10.089470] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:08.445 [2024-12-09 14:24:10.089482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.089488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:08.445 [2024-12-09 14:24:10.089495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:31:08.445 [2024-12-09 14:24:10.089500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.089747] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:08.445 [2024-12-09 14:24:10.101982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.102012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:08.445 [2024-12-09 14:24:10.102022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.235 ms 
00:31:08.445 [2024-12-09 14:24:10.102029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.108878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.109001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:08.445 [2024-12-09 14:24:10.109014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:08.445 [2024-12-09 14:24:10.109020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.109277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.109287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:08.445 [2024-12-09 14:24:10.109293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:31:08.445 [2024-12-09 14:24:10.109299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.109338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.109345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:08.445 [2024-12-09 14:24:10.109351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:08.445 [2024-12-09 14:24:10.109357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.109375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.109382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:08.445 [2024-12-09 14:24:10.109388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:08.445 [2024-12-09 14:24:10.109394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.109410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:08.445 [2024-12-09 14:24:10.111656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.111679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:08.445 [2024-12-09 14:24:10.111687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.250 ms 00:31:08.445 [2024-12-09 14:24:10.111692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.111714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.111721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:08.445 [2024-12-09 14:24:10.111727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:08.445 [2024-12-09 14:24:10.111733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.111749] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:08.445 [2024-12-09 14:24:10.111764] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:08.445 [2024-12-09 14:24:10.111790] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:08.445 [2024-12-09 14:24:10.111803] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:08.445 [2024-12-09 
14:24:10.111887] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:08.445 [2024-12-09 14:24:10.111895] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:08.445 [2024-12-09 14:24:10.111903] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:08.445 [2024-12-09 14:24:10.111911] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:08.445 [2024-12-09 14:24:10.111917] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:08.445 [2024-12-09 14:24:10.111924] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:08.445 [2024-12-09 14:24:10.111930] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:08.445 [2024-12-09 14:24:10.111936] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:08.445 [2024-12-09 14:24:10.111941] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:08.445 [2024-12-09 14:24:10.111949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.111955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:08.445 [2024-12-09 14:24:10.111961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:31:08.445 [2024-12-09 14:24:10.111966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.112032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.445 [2024-12-09 14:24:10.112038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:08.445 [2024-12-09 14:24:10.112044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:08.445 [2024-12-09 14:24:10.112049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.445 [2024-12-09 14:24:10.112129] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:08.445 [2024-12-09 14:24:10.112138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:08.445 [2024-12-09 14:24:10.112145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:08.445 [2024-12-09 14:24:10.112151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:08.446 [2024-12-09 14:24:10.112162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:08.446 [2024-12-09 14:24:10.112173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:08.446 [2024-12-09 14:24:10.112179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:08.446 [2024-12-09 14:24:10.112185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:08.446 [2024-12-09 14:24:10.112195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:08.446 [2024-12-09 14:24:10.112204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 
14:24:10.112209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:08.446 [2024-12-09 14:24:10.112215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:08.446 [2024-12-09 14:24:10.112220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:08.446 [2024-12-09 14:24:10.112230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:08.446 [2024-12-09 14:24:10.112235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:08.446 [2024-12-09 14:24:10.112245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:08.446 [2024-12-09 14:24:10.112255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.446 [2024-12-09 14:24:10.112260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:08.446 [2024-12-09 14:24:10.112265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:08.446 [2024-12-09 14:24:10.112270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.446 [2024-12-09 14:24:10.112275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:08.446 [2024-12-09 14:24:10.112280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:08.446 [2024-12-09 14:24:10.112285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.446 [2024-12-09 14:24:10.112290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:08.446 [2024-12-09 14:24:10.112296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:08.446 [2024-12-09 14:24:10.112301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.446 [2024-12-09 14:24:10.112306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:08.446 [2024-12-09 14:24:10.112311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:08.446 [2024-12-09 14:24:10.112315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:08.446 [2024-12-09 14:24:10.112325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:08.446 [2024-12-09 14:24:10.112330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:08.446 [2024-12-09 14:24:10.112340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:08.446 [2024-12-09 14:24:10.112355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:08.446 [2024-12-09 14:24:10.112360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112365] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:08.446 [2024-12-09 14:24:10.112373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:08.446 
[2024-12-09 14:24:10.112378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:08.446 [2024-12-09 14:24:10.112384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.446 [2024-12-09 14:24:10.112390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:08.446 [2024-12-09 14:24:10.112395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:08.446 [2024-12-09 14:24:10.112400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:08.446 [2024-12-09 14:24:10.112405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:08.446 [2024-12-09 14:24:10.112410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:08.446 [2024-12-09 14:24:10.112415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:08.446 [2024-12-09 14:24:10.112422] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:08.446 [2024-12-09 14:24:10.112429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:08.446 [2024-12-09 14:24:10.112441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:08.446 [2024-12-09 14:24:10.112457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:08.446 [2024-12-09 14:24:10.112462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:08.446 [2024-12-09 14:24:10.112468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:08.446 [2024-12-09 14:24:10.112473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:08.446 [2024-12-09 14:24:10.112511] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:08.446 [2024-12-09 14:24:10.112517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:08.446 [2024-12-09 14:24:10.112531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:08.446 [2024-12-09 14:24:10.112751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:08.446 [2024-12-09 14:24:10.112791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:08.446 [2024-12-09 14:24:10.112814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.446 [2024-12-09 14:24:10.112833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:08.446 [2024-12-09 14:24:10.112879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.742 ms 00:31:08.446 [2024-12-09 14:24:10.112896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.446 [2024-12-09 14:24:10.131698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.446 [2024-12-09 14:24:10.131788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:08.446 [2024-12-09 14:24:10.131799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.736 ms 00:31:08.446 [2024-12-09 14:24:10.131805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.446 [2024-12-09 14:24:10.131833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.446 [2024-12-09 14:24:10.131840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:08.446 [2024-12-09 14:24:10.131846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:08.446 [2024-12-09 14:24:10.131852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.446 [2024-12-09 14:24:10.155438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.446 [2024-12-09 14:24:10.155524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:08.446 [2024-12-09 14:24:10.155548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.548 ms 00:31:08.446 [2024-12-09 14:24:10.155555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.446 [2024-12-09 14:24:10.155575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.446 [2024-12-09 14:24:10.155582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:08.446 [2024-12-09 14:24:10.155589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:08.446 [2024-12-09 14:24:10.155597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.446 [2024-12-09 14:24:10.155663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.446 [2024-12-09 14:24:10.155671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:31:08.446 [2024-12-09 14:24:10.155677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:08.446 [2024-12-09 14:24:10.155683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.446 [2024-12-09 14:24:10.155713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.446 [2024-12-09 14:24:10.155719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:08.446 [2024-12-09 14:24:10.155725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:08.446 [2024-12-09 14:24:10.155731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.446 [2024-12-09 14:24:10.167008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.446 [2024-12-09 14:24:10.167034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:08.447 [2024-12-09 14:24:10.167043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.257 ms 00:31:08.447 [2024-12-09 14:24:10.167049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.447 [2024-12-09 14:24:10.167122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.447 [2024-12-09 14:24:10.167131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:08.447 [2024-12-09 14:24:10.167137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:08.447 [2024-12-09 14:24:10.167143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.447 [2024-12-09 14:24:10.191113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.447 [2024-12-09 14:24:10.191143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:08.447 [2024-12-09 14:24:10.191153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.955 ms 00:31:08.447 [2024-12-09 14:24:10.191160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.447 [2024-12-09 14:24:10.198187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.447 [2024-12-09 14:24:10.198289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:08.447 [2024-12-09 14:24:10.198310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.377 ms 00:31:08.447 [2024-12-09 14:24:10.198316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.706 [2024-12-09 14:24:10.240814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.706 [2024-12-09 14:24:10.240958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:08.706 [2024-12-09 14:24:10.240971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.455 ms 00:31:08.706 [2024-12-09 14:24:10.240978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.706 [2024-12-09 14:24:10.241075] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:08.706 [2024-12-09 14:24:10.241157] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:08.706 [2024-12-09 14:24:10.241228] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:08.706 [2024-12-09 14:24:10.241296] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:08.706 [2024-12-09 14:24:10.241304] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.706 [2024-12-09 14:24:10.241310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:08.706 [2024-12-09 14:24:10.241317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.296 ms 00:31:08.706 [2024-12-09 14:24:10.241322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.706 [2024-12-09 14:24:10.241363] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:08.706 [2024-12-09 14:24:10.241372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.706 [2024-12-09 14:24:10.241381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:08.706 [2024-12-09 14:24:10.241388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:08.706 [2024-12-09 14:24:10.241394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.706 [2024-12-09 14:24:10.252357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.706 [2024-12-09 14:24:10.252457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:08.706 [2024-12-09 14:24:10.252470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.946 ms 00:31:08.706 [2024-12-09 14:24:10.252477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.706 [2024-12-09 14:24:10.258893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.706 [2024-12-09 14:24:10.258973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:08.706 [2024-12-09 14:24:10.258985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:08.706 [2024-12-09 14:24:10.258991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.706 [2024-12-09 14:24:10.259058] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:08.706 [2024-12-09 14:24:10.259165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.706 [2024-12-09 14:24:10.259174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:08.706 [2024-12-09 14:24:10.259181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.109 ms 00:31:08.706 [2024-12-09 14:24:10.259186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.965 [2024-12-09 14:24:10.738852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.965 [2024-12-09 14:24:10.738919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:08.965 [2024-12-09 14:24:10.738934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 479.058 ms 00:31:08.965 [2024-12-09 14:24:10.738943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.965 [2024-12-09 14:24:10.743570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.965 [2024-12-09 14:24:10.743606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:08.965 [2024-12-09 14:24:10.743617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.546 ms 00:31:08.965 [2024-12-09 14:24:10.743625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.965 [2024-12-09 14:24:10.744644] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:31:08.965 [2024-12-09 14:24:10.744678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.965 [2024-12-09 14:24:10.744688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:08.965 [2024-12-09 14:24:10.744699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.020 ms 00:31:08.965 [2024-12-09 14:24:10.744714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.965 [2024-12-09 14:24:10.744748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.965 [2024-12-09 14:24:10.744758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:08.965 [2024-12-09 14:24:10.744768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:08.965 [2024-12-09 14:24:10.744779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.965 [2024-12-09 14:24:10.744814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 485.751 ms, result 0 00:31:08.965 [2024-12-09 14:24:10.744852] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:08.965 [2024-12-09 14:24:10.744919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.965 [2024-12-09 14:24:10.744929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:08.965 [2024-12-09 14:24:10.744937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:08.965 [2024-12-09 14:24:10.744944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.347507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.347594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:09.964 [2024-12-09 14:24:11.347625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 601.566 ms 00:31:09.964 [2024-12-09 14:24:11.347634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.351968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.352014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:09.964 [2024-12-09 14:24:11.352027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.204 ms 00:31:09.964 [2024-12-09 14:24:11.352036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.352459] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:09.964 [2024-12-09 14:24:11.352493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.352502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:09.964 [2024-12-09 14:24:11.352513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.427 ms 00:31:09.964 [2024-12-09 14:24:11.352521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.352578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.352588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:09.964 [2024-12-09 14:24:11.352597] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:09.964 [2024-12-09 14:24:11.352605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.352642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 607.782 ms, result 0 00:31:09.964 [2024-12-09 14:24:11.352688] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:09.964 [2024-12-09 14:24:11.352699] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:09.964 [2024-12-09 14:24:11.352709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.352717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:09.964 [2024-12-09 14:24:11.352726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1093.666 ms 00:31:09.964 [2024-12-09 14:24:11.352734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.352765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.352779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:09.964 [2024-12-09 14:24:11.352787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:09.964 [2024-12-09 14:24:11.352794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.365225] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:09.964 [2024-12-09 14:24:11.365357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.365368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:09.964 [2024-12-09 14:24:11.365379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.546 ms 00:31:09.964 [2024-12-09 14:24:11.365388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.366111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.366135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:09.964 [2024-12-09 14:24:11.366150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:31:09.964 [2024-12-09 14:24:11.366158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.368418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.368440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:09.964 [2024-12-09 14:24:11.368450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.242 ms 00:31:09.964 [2024-12-09 14:24:11.368459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.368503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:09.964 [2024-12-09 14:24:11.368514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:09.964 [2024-12-09 14:24:11.368523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:09.964 [2024-12-09 14:24:11.368544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:09.964 [2024-12-09 14:24:11.368656] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:09.964 [2024-12-09 14:24:11.368667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization
00:31:09.964 [2024-12-09 14:24:11.368675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms
00:31:09.964 [2024-12-09 14:24:11.368683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:09.964 [2024-12-09 14:24:11.368704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:09.964 [2024-12-09 14:24:11.368712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller
00:31:09.964 [2024-12-09 14:24:11.368722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms
00:31:09.964 [2024-12-09 14:24:11.368729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:09.965 [2024-12-09 14:24:11.368766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:31:09.965 [2024-12-09 14:24:11.368777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:09.965 [2024-12-09 14:24:11.368785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup
00:31:09.965 [2024-12-09 14:24:11.368793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms
00:31:09.965 [2024-12-09 14:24:11.368801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:09.965 [2024-12-09 14:24:11.368850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:09.965 [2024-12-09 14:24:11.368859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:31:09.965 [2024-12-09 14:24:11.368867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms
00:31:09.965 [2024-12-09 14:24:11.368874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:09.965 [2024-12-09 14:24:11.370206] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1280.704 ms, result 0
00:31:09.965 [2024-12-09 14:24:11.385764] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:09.965 [2024-12-09 14:24:11.401773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:31:09.965 [2024-12-09 14:24:11.411060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:31:09.965 Validate MD5 checksum, iteration 1 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:09.965 14:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:09.965 [2024-12-09 14:24:11.677912] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization...
00:31:09.965 [2024-12-09 14:24:11.678240] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83689 ]
00:31:10.225 [2024-12-09 14:24:11.835157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:10.225 [2024-12-09 14:24:11.956477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:12.138  [2024-12-09T14:24:14.503Z] Copying: 518/1024 [MB] (518 MBps) [2024-12-09T14:24:19.792Z] Copying: 1024/1024 [MB] (average 553 MBps)
00:31:17.998
00:31:17.998 14:24:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:31:17.998 14:24:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b5a62c44439cff340a8c5311cf4d6e4c
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b5a62c44439cff340a8c5311cf4d6e4c != \b\5\a\6\2\c\4\4\4\3\9\c\f\f\3\4\0\a\8\c\5\3\1\1\c\f\4\d\6\e\4\c ]]
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:31:19.369 Validate MD5 checksum, iteration 2 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:19.369 14:24:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:19.369 [2024-12-09 14:24:20.800911] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization...
00:31:19.369 [2024-12-09 14:24:20.801024] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83784 ]
00:31:19.369 [2024-12-09 14:24:20.960342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:19.369 [2024-12-09 14:24:21.055717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:21.274  [2024-12-09T14:24:23.068Z] Copying: 741/1024 [MB] (741 MBps) [2024-12-09T14:24:25.607Z] Copying: 1024/1024 [MB] (average 698 MBps)
00:31:23.813
00:31:23.813 14:24:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:31:23.813 14:24:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=def51fc230c37f884a1e21c56b2b0d14
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ def51fc230c37f884a1e21c56b2b0d14 != \d\e\f\5\1\f\c\2\3\0\c\3\7\f\8\8\4\a\1\e\2\1\c\5\6\b\2\b\0\d\1\4 ]]
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83660 ]]
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83660
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83660 ']'
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83660
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83660
00:31:25.710 killing process with pid 83660 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83660'
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83660
00:31:25.710 14:24:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83660
00:31:26.325 [2024-12-09 14:24:27.803634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:31:26.325 [2024-12-09 14:24:27.814819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.814852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:31:26.325 [2024-12-09 14:24:27.814862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:31:26.325 [2024-12-09 14:24:27.814869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.814885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:31:26.325 [2024-12-09 14:24:27.817005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.817029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:31:26.325 [2024-12-09 14:24:27.817040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.108 ms
00:31:26.325 [2024-12-09 14:24:27.817047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.817250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.817259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:31:26.325 [2024-12-09 14:24:27.817265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms
00:31:26.325 [2024-12-09 14:24:27.817271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.818296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.818402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:31:26.325 [2024-12-09 14:24:27.818414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.012 ms
00:31:26.325 [2024-12-09 14:24:27.818424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.819300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.819313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:31:26.325 [2024-12-09 14:24:27.819321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.850 ms
00:31:26.325 [2024-12-09 14:24:27.819328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.826568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.826593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:31:26.325 [2024-12-09 14:24:27.826600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.212 ms
00:31:26.325 [2024-12-09 14:24:27.826610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.831002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.831026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:31:26.325 [2024-12-09 14:24:27.831034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.364 ms
00:31:26.325 [2024-12-09 14:24:27.831041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.831107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.831115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:31:26.325 [2024-12-09 14:24:27.831122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms
00:31:26.325 [2024-12-09 14:24:27.831131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.838313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.838336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:31:26.325 [2024-12-09 14:24:27.838343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.169 ms
00:31:26.325 [2024-12-09 14:24:27.838348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.325 [2024-12-09 14:24:27.845589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.325 [2024-12-09 14:24:27.845612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:31:26.325 [2024-12-09 14:24:27.845618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.217 ms
00:31:26.325 [2024-12-09 14:24:27.845623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.852663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.326 [2024-12-09 14:24:27.852763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:31:26.326 [2024-12-09 14:24:27.852775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.016 ms
00:31:26.326 [2024-12-09 14:24:27.852780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.859837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.326 [2024-12-09 14:24:27.859930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:31:26.326 [2024-12-09 14:24:27.859941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.013 ms
00:31:26.326 [2024-12-09 14:24:27.859946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.859969] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:31:26.326 [2024-12-09 14:24:27.859979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:31:26.326 [2024-12-09 14:24:27.859987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:31:26.326 [2024-12-09 14:24:27.859993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:31:26.326 [2024-12-09 14:24:27.859999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:31:26.326 [2024-12-09 14:24:27.860085] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:31:26.326 [2024-12-09 14:24:27.860091] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 173ebc36-92e0-4d65-9c6e-5117fb3df056
00:31:26.326 [2024-12-09 14:24:27.860097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:31:26.326 [2024-12-09 14:24:27.860102] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:31:26.326 [2024-12-09 14:24:27.860107] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:31:26.326 [2024-12-09 14:24:27.860113] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:31:26.326 [2024-12-09 14:24:27.860118] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:31:26.326 [2024-12-09 14:24:27.860124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:31:26.326 [2024-12-09 14:24:27.860134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:31:26.326 [2024-12-09 14:24:27.860138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:31:26.326 [2024-12-09 14:24:27.860143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:31:26.326 [2024-12-09 14:24:27.860148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.326 [2024-12-09 14:24:27.860155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:31:26.326 [2024-12-09 14:24:27.860162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms
00:31:26.326 [2024-12-09 14:24:27.860168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.869814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.326 [2024-12-09 14:24:27.869837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:31:26.326 [2024-12-09 14:24:27.869845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.632 ms
00:31:26.326 [2024-12-09 14:24:27.869851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.870118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:26.326 [2024-12-09 14:24:27.870129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:31:26.326 [2024-12-09 14:24:27.870135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms
00:31:26.326 [2024-12-09 14:24:27.870140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.902920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:27.903020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:31:26.326 [2024-12-09 14:24:27.903031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:27.903037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.903063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:27.903070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:31:26.326 [2024-12-09 14:24:27.903076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:27.903081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.903134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:27.903142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:31:26.326 [2024-12-09 14:24:27.903148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:27.903154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.903170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:27.903176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:31:26.326 [2024-12-09 14:24:27.903182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:27.903188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:27.961720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:27.961839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:31:26.326 [2024-12-09 14:24:27.961851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:27.961857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:28.010746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:28.010779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:31:26.326 [2024-12-09 14:24:28.010789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:28.010795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:28.010860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:28.010868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:31:26.326 [2024-12-09 14:24:28.010875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:28.010881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:28.010911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:28.010926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:31:26.326 [2024-12-09 14:24:28.010932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:28.010938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:28.011010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:28.011017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:31:26.326 [2024-12-09 14:24:28.011023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:28.011029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:28.011051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:28.011059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:31:26.326 [2024-12-09 14:24:28.011067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:28.011073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:28.011101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:28.011108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:31:26.326 [2024-12-09 14:24:28.011114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:28.011120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:28.011150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:26.326 [2024-12-09 14:24:28.011159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:31:26.326 [2024-12-09 14:24:28.011165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:26.326 [2024-12-09 14:24:28.011171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:26.326 [2024-12-09 14:24:28.011259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 196.418 ms, result 0
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:31:26.894 Remove shared memory files 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83444
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:26.894 14:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:31:26.894 ************************************
00:31:26.894 END TEST ftl_upgrade_shutdown
00:31:26.895 ************************************
00:31:26.895
00:31:26.895 real 1m26.056s
00:31:26.895 user 1m57.547s
00:31:26.895 sys 0m18.297s
00:31:26.895 14:24:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:26.895 14:24:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:27.153 14:24:28 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:31:27.153 14:24:28 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:31:27.153 14:24:28 ftl -- ftl/ftl.sh@14 -- # killprocess 75018
00:31:27.153 14:24:28 ftl -- common/autotest_common.sh@954 -- # '[' -z 75018 ']'
00:31:27.153 14:24:28 ftl -- common/autotest_common.sh@958 -- # kill -0 75018
00:31:27.153 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75018) - No such process
00:31:27.153 Process with pid 75018 is not found 14:24:28 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75018 is not found'
00:31:27.153 14:24:28 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:31:27.153 14:24:28 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83905
00:31:27.153 14:24:28 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83905
00:31:27.153 14:24:28 ftl -- common/autotest_common.sh@835 -- # '[' -z 83905 ']'
00:31:27.153 14:24:28 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:27.153 14:24:28 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:27.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 14:24:28 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:27.153 14:24:28 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:27.153 14:24:28 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:27.153 14:24:28 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:27.153 [2024-12-09 14:24:28.757317] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization...
00:31:27.153 [2024-12-09 14:24:28.757407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83905 ]
00:31:27.153 [2024-12-09 14:24:28.907120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:27.411 [2024-12-09 14:24:28.983619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:27.978 14:24:29 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:27.978 14:24:29 ftl -- common/autotest_common.sh@868 -- # return 0
00:31:27.978 14:24:29 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:31:28.236 nvme0n1
00:31:28.236 14:24:29 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:31:28.236 14:24:29 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:28.236 14:24:29 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:31:28.495 14:24:30 ftl -- ftl/common.sh@28 -- # stores=d982e825-2778-4f37-8912-46a67c6ec904
00:31:28.495 14:24:30 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:31:28.495 14:24:30 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d982e825-2778-4f37-8912-46a67c6ec904
00:31:28.495 14:24:30 ftl -- ftl/ftl.sh@23 -- # killprocess 83905
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@954 -- # '[' -z 83905 ']'
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@958 -- # kill -0 83905
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@959 -- # uname
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83905
00:31:28.495 killing process with pid 83905 14:24:30 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83905'
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@973 -- # kill 83905
00:31:28.495 14:24:30 ftl -- common/autotest_common.sh@978 -- # wait 83905
00:31:29.871 14:24:31 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:31:29.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:29.871 Waiting for block devices as requested
00:31:30.133 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:31:30.133 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:31:30.133 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:31:30.395 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:31:35.682 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:31:35.682 14:24:37 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:31:35.682 Remove shared memory files 14:24:37 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:35.682 14:24:37 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:31:35.682 14:24:37 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:31:35.682 14:24:37 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:31:35.682 14:24:37 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:35.682 14:24:37 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:31:35.682 ************************************
00:31:35.682 END TEST ftl
00:31:35.682 ************************************
00:31:35.682
00:31:35.682 real 13m21.172s
00:31:35.682 user 15m18.883s
00:31:35.682 sys 1m25.524s
00:31:35.682 14:24:37 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:35.682 14:24:37 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:35.682 14:24:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:31:35.682 14:24:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:31:35.682 14:24:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:31:35.682 14:24:37 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:31:35.682 14:24:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:31:35.682 14:24:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:31:35.682 14:24:37 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:31:35.682 14:24:37 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:31:35.682 14:24:37 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:31:35.682 14:24:37 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:31:35.682 14:24:37 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:35.682 14:24:37 -- common/autotest_common.sh@10 -- # set +x
00:31:35.682 14:24:37 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:31:35.682 14:24:37 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:31:35.682 14:24:37 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:31:35.682 14:24:37 -- common/autotest_common.sh@10 -- # set +x
00:31:37.066 INFO: APP EXITING
00:31:37.066 INFO: killing all VMs
00:31:37.066 INFO: killing vhost app
00:31:37.066 INFO: EXIT DONE
00:31:37.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:37.638 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:31:37.638 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:31:37.638 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:31:37.638 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:31:37.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:38.473 Cleaning
00:31:38.473 Removing: /var/run/dpdk/spdk0/config
00:31:38.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:38.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:38.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:38.473 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:38.473 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:31:38.473 Removing: /var/run/dpdk/spdk0/hugepage_info
00:31:38.473 Removing: /var/run/dpdk/spdk0
00:31:38.473 Removing: /var/run/dpdk/spdk_pid56916
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57118
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57335
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57429
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57463
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57586
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57598
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57797
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57890
00:31:38.473 Removing: /var/run/dpdk/spdk_pid57986
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58092
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58189
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58228
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58265
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58335
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58436
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58872
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58925
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58977
00:31:38.473 Removing: /var/run/dpdk/spdk_pid58993
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59084
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59100
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59197
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59213
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59266
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59284
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59342
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59360
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59520
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59557
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59640
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59823
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59907
00:31:38.473 Removing: /var/run/dpdk/spdk_pid59949
00:31:38.473 Removing: /var/run/dpdk/spdk_pid60408
00:31:38.474 Removing: /var/run/dpdk/spdk_pid60503
00:31:38.474 Removing: /var/run/dpdk/spdk_pid60612
00:31:38.474 Removing: /var/run/dpdk/spdk_pid60667
00:31:38.474 Removing: /var/run/dpdk/spdk_pid60687
00:31:38.474 Removing: /var/run/dpdk/spdk_pid60771
00:31:38.474 Removing: /var/run/dpdk/spdk_pid61398
00:31:38.474 Removing: /var/run/dpdk/spdk_pid61429
00:31:38.474 Removing: /var/run/dpdk/spdk_pid61901
00:31:38.474 Removing: /var/run/dpdk/spdk_pid61994
00:31:38.474 Removing: /var/run/dpdk/spdk_pid62114
00:31:38.474 Removing: /var/run/dpdk/spdk_pid62167
00:31:38.474 Removing: /var/run/dpdk/spdk_pid62192
00:31:38.474 Removing: /var/run/dpdk/spdk_pid62218
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64064
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64196
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64205
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64217
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64267
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64271
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64283
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64328
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64332
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64344
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64389
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64393
00:31:38.474 Removing: /var/run/dpdk/spdk_pid64405
00:31:38.474 Removing: /var/run/dpdk/spdk_pid65790
00:31:38.474 Removing: /var/run/dpdk/spdk_pid65887
00:31:38.474 Removing: /var/run/dpdk/spdk_pid67287
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69065
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69128
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69203
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69314
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69400
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69496
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69570
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69646
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69749
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69842
00:31:38.474 Removing: /var/run/dpdk/spdk_pid69932
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70002
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70076
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70186
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70276
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70373
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70442
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70517
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70627
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70713
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70809
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70877
00:31:38.474 Removing: /var/run/dpdk/spdk_pid70957
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71030
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71100
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71203
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71298
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71394
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71463
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71538
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71612
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71686
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71795
00:31:38.474 Removing: /var/run/dpdk/spdk_pid71880
00:31:38.474 Removing: /var/run/dpdk/spdk_pid72024
00:31:38.474 Removing: /var/run/dpdk/spdk_pid72308
00:31:38.474 Removing: /var/run/dpdk/spdk_pid72340
00:31:38.474 Removing: /var/run/dpdk/spdk_pid72780
00:31:38.474 Removing: /var/run/dpdk/spdk_pid72960
00:31:38.474 Removing: /var/run/dpdk/spdk_pid73060
00:31:38.474 Removing: /var/run/dpdk/spdk_pid73170
00:31:38.735 Removing: /var/run/dpdk/spdk_pid73215
00:31:38.735 Removing: /var/run/dpdk/spdk_pid73245
00:31:38.735 Removing: /var/run/dpdk/spdk_pid73546
00:31:38.735 Removing: /var/run/dpdk/spdk_pid73603
00:31:38.735 Removing: /var/run/dpdk/spdk_pid73670
00:31:38.735 Removing: /var/run/dpdk/spdk_pid74057
00:31:38.735 Removing: /var/run/dpdk/spdk_pid74203
00:31:38.735 Removing: /var/run/dpdk/spdk_pid75018
00:31:38.735 Removing: /var/run/dpdk/spdk_pid75147
00:31:38.735 Removing: /var/run/dpdk/spdk_pid75306
00:31:38.735 Removing: /var/run/dpdk/spdk_pid75403
00:31:38.735 Removing: /var/run/dpdk/spdk_pid75718
00:31:38.735 Removing: /var/run/dpdk/spdk_pid75964
00:31:38.735 Removing: /var/run/dpdk/spdk_pid76337
00:31:38.735 Removing: /var/run/dpdk/spdk_pid76548
00:31:38.735 Removing: /var/run/dpdk/spdk_pid76737
00:31:38.735 Removing: /var/run/dpdk/spdk_pid76790
00:31:38.735 Removing: /var/run/dpdk/spdk_pid76955
00:31:38.735 Removing: /var/run/dpdk/spdk_pid76991
00:31:38.735 Removing: /var/run/dpdk/spdk_pid77044
00:31:38.735 Removing: /var/run/dpdk/spdk_pid77297
00:31:38.735 Removing: /var/run/dpdk/spdk_pid77528
00:31:38.735 Removing: /var/run/dpdk/spdk_pid77988
00:31:38.735 Removing: /var/run/dpdk/spdk_pid78694
00:31:38.735 Removing: /var/run/dpdk/spdk_pid79383
00:31:38.735 Removing: /var/run/dpdk/spdk_pid80175
00:31:38.735 Removing: /var/run/dpdk/spdk_pid80323
00:31:38.735 Removing: /var/run/dpdk/spdk_pid80404
00:31:38.735 Removing: /var/run/dpdk/spdk_pid80779
00:31:38.735 Removing: /var/run/dpdk/spdk_pid80832
00:31:38.735 Removing: /var/run/dpdk/spdk_pid81508
00:31:38.735 Removing: /var/run/dpdk/spdk_pid82107
00:31:38.735 Removing: /var/run/dpdk/spdk_pid82912
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83038
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83081
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83138
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83190
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83248
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83444
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83524
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83593
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83660
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83689
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83784
00:31:38.735 Removing: /var/run/dpdk/spdk_pid83905
00:31:38.735 Clean
00:31:38.735 14:24:40 -- common/autotest_common.sh@1453 -- # return 0
00:31:38.735 14:24:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:31:38.735 14:24:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:38.735 14:24:40 -- common/autotest_common.sh@10 -- # set +x
00:31:38.735 14:24:40 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:31:38.735 14:24:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:38.735 14:24:40 -- common/autotest_common.sh@10 -- # set +x
00:31:38.997 14:24:40 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:38.997 14:24:40 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:31:38.997 14:24:40 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:31:38.997 14:24:40 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:31:38.997 14:24:40 -- spdk/autotest.sh@398 -- # hostname
00:31:38.997 14:24:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:31:38.997 geninfo: WARNING: invalid characters removed from testname!
00:32:05.582 14:25:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:08.132 14:25:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:10.678 14:25:12 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:13.982 14:25:15 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:15.891 14:25:17 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:18.437 14:25:19 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:20.984 14:25:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:20.984 14:25:22 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:20.984 14:25:22 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:20.984 14:25:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:20.984 14:25:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:20.984 14:25:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:20.997 + [[ -n 5027 ]]
00:32:20.997 + sudo kill 5027
00:32:20.997 [Pipeline] }
00:32:21.012 [Pipeline] // timeout
00:32:21.017 [Pipeline] }
00:32:21.027 [Pipeline] // stage
00:32:21.031 [Pipeline] }
00:32:21.041 [Pipeline] // catchError
00:32:21.048 [Pipeline] stage
00:32:21.049 [Pipeline] { (Stop VM)
00:32:21.058 [Pipeline] sh
00:32:21.343 + vagrant halt
00:32:23.913 ==> default: Halting domain...
00:32:30.510 [Pipeline] sh
00:32:30.791 + vagrant destroy -f
00:32:33.336 ==> default: Removing domain...
00:32:33.921 [Pipeline] sh
00:32:34.207 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:32:34.218 [Pipeline] }
00:32:34.233 [Pipeline] // stage
00:32:34.238 [Pipeline] }
00:32:34.252 [Pipeline] // dir
00:32:34.257 [Pipeline] }
00:32:34.272 [Pipeline] // wrap
00:32:34.278 [Pipeline] }
00:32:34.291 [Pipeline] // catchError
00:32:34.300 [Pipeline] stage
00:32:34.302 [Pipeline] { (Epilogue)
00:32:34.316 [Pipeline] sh
00:32:34.603 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:39.895 [Pipeline] catchError
00:32:39.897 [Pipeline] {
00:32:39.909 [Pipeline] sh
00:32:40.197 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:40.197 Artifacts sizes are good
00:32:40.208 [Pipeline] }
00:32:40.222 [Pipeline] // catchError
00:32:40.233 [Pipeline] archiveArtifacts
00:32:40.240 Archiving artifacts
00:32:40.354 [Pipeline] cleanWs
00:32:40.366 [WS-CLEANUP] Deleting project workspace...
00:32:40.366 [WS-CLEANUP] Deferred wipeout is used...
00:32:40.374 [WS-CLEANUP] done
00:32:40.376 [Pipeline] }
00:32:40.391 [Pipeline] // stage
00:32:40.396 [Pipeline] }
00:32:40.410 [Pipeline] // node
00:32:40.416 [Pipeline] End of Pipeline
00:32:40.459 Finished: SUCCESS